Jul 12 00:24:39.008743 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jul 12 00:24:39.008780 kernel: Linux version 5.15.186-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Jul 11 23:15:18 -00 2025
Jul 12 00:24:39.008804 kernel: efi: EFI v2.70 by EDK II
Jul 12 00:24:39.008819 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x716fcf98
Jul 12 00:24:39.008833 kernel: ACPI: Early table checksum verification disabled
Jul 12 00:24:39.008846 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jul 12 00:24:39.008863 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jul 12 00:24:39.008877 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jul 12 00:24:39.008891 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jul 12 00:24:39.008905 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jul 12 00:24:39.008923 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jul 12 00:24:39.008937 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jul 12 00:24:39.008951 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jul 12 00:24:39.008965 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jul 12 00:24:39.008982 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jul 12 00:24:39.009002 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jul 12 00:24:39.009016 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jul 12 00:24:39.009031 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jul 12 00:24:39.009046 kernel: printk: bootconsole [uart0] enabled
Jul 12 00:24:39.009060 kernel: NUMA: Failed to initialise from firmware
Jul 12 00:24:39.009075 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 12 00:24:39.009090 kernel: NUMA: NODE_DATA [mem 0x4b5843900-0x4b5848fff]
Jul 12 00:24:39.009104 kernel: Zone ranges:
Jul 12 00:24:39.009119 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jul 12 00:24:39.009133 kernel: DMA32 empty
Jul 12 00:24:39.009148 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jul 12 00:24:39.009166 kernel: Movable zone start for each node
Jul 12 00:24:39.009181 kernel: Early memory node ranges
Jul 12 00:24:39.009196 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jul 12 00:24:39.009210 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jul 12 00:24:39.009246 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jul 12 00:24:39.009263 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jul 12 00:24:39.009279 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jul 12 00:24:39.009294 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jul 12 00:24:39.009309 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jul 12 00:24:39.009324 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jul 12 00:24:39.009338 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 12 00:24:39.009353 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jul 12 00:24:39.009373 kernel: psci: probing for conduit method from ACPI.
Jul 12 00:24:39.009388 kernel: psci: PSCIv1.0 detected in firmware.
Jul 12 00:24:39.009410 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 12 00:24:39.009426 kernel: psci: Trusted OS migration not required
Jul 12 00:24:39.009441 kernel: psci: SMC Calling Convention v1.1
Jul 12 00:24:39.009461 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Jul 12 00:24:39.009490 kernel: ACPI: SRAT not present
Jul 12 00:24:39.009508 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880
Jul 12 00:24:39.009523 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096
Jul 12 00:24:39.009539 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 12 00:24:39.009555 kernel: Detected PIPT I-cache on CPU0
Jul 12 00:24:39.009570 kernel: CPU features: detected: GIC system register CPU interface
Jul 12 00:24:39.009585 kernel: CPU features: detected: Spectre-v2
Jul 12 00:24:39.009600 kernel: CPU features: detected: Spectre-v3a
Jul 12 00:24:39.009616 kernel: CPU features: detected: Spectre-BHB
Jul 12 00:24:39.009631 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 12 00:24:39.009651 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 12 00:24:39.009667 kernel: CPU features: detected: ARM erratum 1742098
Jul 12 00:24:39.009682 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jul 12 00:24:39.009698 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jul 12 00:24:39.009713 kernel: Policy zone: Normal
Jul 12 00:24:39.009731 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6cb548cec1e3020e9c3dcbc1d7670f4d8bdc2e3c8e062898ccaed7fc9d588f65
Jul 12 00:24:39.009747 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 12 00:24:39.009763 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 12 00:24:39.009778 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 12 00:24:39.009793 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 12 00:24:39.009813 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jul 12 00:24:39.009829 kernel: Memory: 3824460K/4030464K available (9792K kernel code, 2094K rwdata, 7588K rodata, 36416K init, 777K bss, 206004K reserved, 0K cma-reserved)
Jul 12 00:24:39.009845 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 12 00:24:39.009860 kernel: trace event string verifier disabled
Jul 12 00:24:39.009876 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 12 00:24:39.009892 kernel: rcu: RCU event tracing is enabled.
Jul 12 00:24:39.009907 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 12 00:24:39.009923 kernel: Trampoline variant of Tasks RCU enabled.
Jul 12 00:24:39.009939 kernel: Tracing variant of Tasks RCU enabled.
Jul 12 00:24:39.009954 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 12 00:24:39.013840 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 12 00:24:39.013883 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 12 00:24:39.013911 kernel: GICv3: 96 SPIs implemented
Jul 12 00:24:39.013927 kernel: GICv3: 0 Extended SPIs implemented
Jul 12 00:24:39.013943 kernel: GICv3: Distributor has no Range Selector support
Jul 12 00:24:39.013958 kernel: Root IRQ handler: gic_handle_irq
Jul 12 00:24:39.013973 kernel: GICv3: 16 PPIs implemented
Jul 12 00:24:39.013989 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jul 12 00:24:39.014004 kernel: ACPI: SRAT not present
Jul 12 00:24:39.014019 kernel: ITS [mem 0x10080000-0x1009ffff]
Jul 12 00:24:39.014035 kernel: ITS@0x0000000010080000: allocated 8192 Devices @400090000 (indirect, esz 8, psz 64K, shr 1)
Jul 12 00:24:39.014051 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000a0000 (flat, esz 8, psz 64K, shr 1)
Jul 12 00:24:39.014067 kernel: GICv3: using LPI property table @0x00000004000b0000
Jul 12 00:24:39.014086 kernel: ITS: Using hypervisor restricted LPI range [128]
Jul 12 00:24:39.014102 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Jul 12 00:24:39.014117 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jul 12 00:24:39.014133 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jul 12 00:24:39.014149 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jul 12 00:24:39.014164 kernel: Console: colour dummy device 80x25
Jul 12 00:24:39.014180 kernel: printk: console [tty1] enabled
Jul 12 00:24:39.014196 kernel: ACPI: Core revision 20210730
Jul 12 00:24:39.014212 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jul 12 00:24:39.014252 kernel: pid_max: default: 32768 minimum: 301
Jul 12 00:24:39.014296 kernel: LSM: Security Framework initializing
Jul 12 00:24:39.014316 kernel: SELinux: Initializing.
Jul 12 00:24:39.014332 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:24:39.014348 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:24:39.014365 kernel: rcu: Hierarchical SRCU implementation.
Jul 12 00:24:39.014380 kernel: Platform MSI: ITS@0x10080000 domain created
Jul 12 00:24:39.014396 kernel: PCI/MSI: ITS@0x10080000 domain created
Jul 12 00:24:39.014411 kernel: Remapping and enabling EFI services.
Jul 12 00:24:39.014427 kernel: smp: Bringing up secondary CPUs ...
Jul 12 00:24:39.014443 kernel: Detected PIPT I-cache on CPU1
Jul 12 00:24:39.014464 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jul 12 00:24:39.014480 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Jul 12 00:24:39.014497 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jul 12 00:24:39.014513 kernel: smp: Brought up 1 node, 2 CPUs
Jul 12 00:24:39.014528 kernel: SMP: Total of 2 processors activated.
Jul 12 00:24:39.014544 kernel: CPU features: detected: 32-bit EL0 Support
Jul 12 00:24:39.014560 kernel: CPU features: detected: 32-bit EL1 Support
Jul 12 00:24:39.014575 kernel: CPU features: detected: CRC32 instructions
Jul 12 00:24:39.014591 kernel: CPU: All CPU(s) started at EL1
Jul 12 00:24:39.014611 kernel: alternatives: patching kernel code
Jul 12 00:24:39.014627 kernel: devtmpfs: initialized
Jul 12 00:24:39.014653 kernel: KASLR disabled due to lack of seed
Jul 12 00:24:39.014673 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 12 00:24:39.014690 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 12 00:24:39.014727 kernel: pinctrl core: initialized pinctrl subsystem
Jul 12 00:24:39.014743 kernel: SMBIOS 3.0.0 present.
Jul 12 00:24:39.014760 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jul 12 00:24:39.014776 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 12 00:24:39.014792 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 12 00:24:39.014809 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 12 00:24:39.014831 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 12 00:24:39.014848 kernel: audit: initializing netlink subsys (disabled)
Jul 12 00:24:39.014864 kernel: audit: type=2000 audit(0.294:1): state=initialized audit_enabled=0 res=1
Jul 12 00:24:39.014880 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 12 00:24:39.014897 kernel: cpuidle: using governor menu
Jul 12 00:24:39.014917 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 12 00:24:39.014934 kernel: ASID allocator initialised with 32768 entries
Jul 12 00:24:39.014950 kernel: ACPI: bus type PCI registered
Jul 12 00:24:39.014967 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 12 00:24:39.014983 kernel: Serial: AMBA PL011 UART driver
Jul 12 00:24:39.015000 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 12 00:24:39.015017 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Jul 12 00:24:39.015033 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 12 00:24:39.015050 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Jul 12 00:24:39.015070 kernel: cryptd: max_cpu_qlen set to 1000
Jul 12 00:24:39.015087 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 12 00:24:39.015103 kernel: ACPI: Added _OSI(Module Device)
Jul 12 00:24:39.015120 kernel: ACPI: Added _OSI(Processor Device)
Jul 12 00:24:39.015136 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 12 00:24:39.015167 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 12 00:24:39.015184 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 12 00:24:39.015200 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 12 00:24:39.015217 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 12 00:24:39.015257 kernel: ACPI: Interpreter enabled
Jul 12 00:24:39.015280 kernel: ACPI: Using GIC for interrupt routing
Jul 12 00:24:39.015296 kernel: ACPI: MCFG table detected, 1 entries
Jul 12 00:24:39.015313 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jul 12 00:24:39.015599 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 12 00:24:39.015800 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 12 00:24:39.020539 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 12 00:24:39.020743 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jul 12 00:24:39.020945 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jul 12 00:24:39.020969 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jul 12 00:24:39.020986 kernel: acpiphp: Slot [1] registered
Jul 12 00:24:39.021003 kernel: acpiphp: Slot [2] registered
Jul 12 00:24:39.021020 kernel: acpiphp: Slot [3] registered
Jul 12 00:24:39.021036 kernel: acpiphp: Slot [4] registered
Jul 12 00:24:39.021053 kernel: acpiphp: Slot [5] registered
Jul 12 00:24:39.021069 kernel: acpiphp: Slot [6] registered
Jul 12 00:24:39.021085 kernel: acpiphp: Slot [7] registered
Jul 12 00:24:39.021107 kernel: acpiphp: Slot [8] registered
Jul 12 00:24:39.021123 kernel: acpiphp: Slot [9] registered
Jul 12 00:24:39.021139 kernel: acpiphp: Slot [10] registered
Jul 12 00:24:39.021156 kernel: acpiphp: Slot [11] registered
Jul 12 00:24:39.021172 kernel: acpiphp: Slot [12] registered
Jul 12 00:24:39.021188 kernel: acpiphp: Slot [13] registered
Jul 12 00:24:39.021204 kernel: acpiphp: Slot [14] registered
Jul 12 00:24:39.021251 kernel: acpiphp: Slot [15] registered
Jul 12 00:24:39.021273 kernel: acpiphp: Slot [16] registered
Jul 12 00:24:39.021295 kernel: acpiphp: Slot [17] registered
Jul 12 00:24:39.021312 kernel: acpiphp: Slot [18] registered
Jul 12 00:24:39.021328 kernel: acpiphp: Slot [19] registered
Jul 12 00:24:39.021344 kernel: acpiphp: Slot [20] registered
Jul 12 00:24:39.021360 kernel: acpiphp: Slot [21] registered
Jul 12 00:24:39.021376 kernel: acpiphp: Slot [22] registered
Jul 12 00:24:39.021393 kernel: acpiphp: Slot [23] registered
Jul 12 00:24:39.021409 kernel: acpiphp: Slot [24] registered
Jul 12 00:24:39.021425 kernel: acpiphp: Slot [25] registered
Jul 12 00:24:39.021442 kernel: acpiphp: Slot [26] registered
Jul 12 00:24:39.021462 kernel: acpiphp: Slot [27] registered
Jul 12 00:24:39.021478 kernel: acpiphp: Slot [28] registered
Jul 12 00:24:39.021494 kernel: acpiphp: Slot [29] registered
Jul 12 00:24:39.021511 kernel: acpiphp: Slot [30] registered
Jul 12 00:24:39.021527 kernel: acpiphp: Slot [31] registered
Jul 12 00:24:39.021543 kernel: PCI host bridge to bus 0000:00
Jul 12 00:24:39.023553 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jul 12 00:24:39.023761 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 12 00:24:39.023940 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jul 12 00:24:39.024151 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jul 12 00:24:39.024424 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jul 12 00:24:39.024642 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jul 12 00:24:39.024846 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jul 12 00:24:39.025064 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jul 12 00:24:39.025326 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jul 12 00:24:39.025531 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 12 00:24:39.025736 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jul 12 00:24:39.025927 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jul 12 00:24:39.026139 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jul 12 00:24:39.026361 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jul 12 00:24:39.026558 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 12 00:24:39.026798 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jul 12 00:24:39.026996 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jul 12 00:24:39.027190 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jul 12 00:24:39.027419 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jul 12 00:24:39.027620 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jul 12 00:24:39.027794 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jul 12 00:24:39.027971 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 12 00:24:39.028151 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jul 12 00:24:39.028175 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 12 00:24:39.028192 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 12 00:24:39.028210 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 12 00:24:39.037925 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 12 00:24:39.037956 kernel: iommu: Default domain type: Translated
Jul 12 00:24:39.037975 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 12 00:24:39.037992 kernel: vgaarb: loaded
Jul 12 00:24:39.038009 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 12 00:24:39.038035 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 12 00:24:39.038052 kernel: PTP clock support registered
Jul 12 00:24:39.038069 kernel: Registered efivars operations
Jul 12 00:24:39.038085 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 12 00:24:39.038101 kernel: VFS: Disk quotas dquot_6.6.0
Jul 12 00:24:39.038119 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 12 00:24:39.038135 kernel: pnp: PnP ACPI init
Jul 12 00:24:39.038415 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jul 12 00:24:39.038450 kernel: pnp: PnP ACPI: found 1 devices
Jul 12 00:24:39.038468 kernel: NET: Registered PF_INET protocol family
Jul 12 00:24:39.038485 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 12 00:24:39.038502 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 12 00:24:39.038519 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 12 00:24:39.038535 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 12 00:24:39.038552 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Jul 12 00:24:39.038569 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 12 00:24:39.038586 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:24:39.038607 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:24:39.038624 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 12 00:24:39.038640 kernel: PCI: CLS 0 bytes, default 64
Jul 12 00:24:39.038657 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jul 12 00:24:39.038673 kernel: kvm [1]: HYP mode not available
Jul 12 00:24:39.038690 kernel: Initialise system trusted keyrings
Jul 12 00:24:39.038731 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 12 00:24:39.038748 kernel: Key type asymmetric registered
Jul 12 00:24:39.038764 kernel: Asymmetric key parser 'x509' registered
Jul 12 00:24:39.038785 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 12 00:24:39.038802 kernel: io scheduler mq-deadline registered
Jul 12 00:24:39.038819 kernel: io scheduler kyber registered
Jul 12 00:24:39.038835 kernel: io scheduler bfq registered
Jul 12 00:24:39.039052 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jul 12 00:24:39.039079 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 12 00:24:39.039097 kernel: ACPI: button: Power Button [PWRB]
Jul 12 00:24:39.039113 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jul 12 00:24:39.039130 kernel: ACPI: button: Sleep Button [SLPB]
Jul 12 00:24:39.039153 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 12 00:24:39.039171 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jul 12 00:24:39.043500 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jul 12 00:24:39.043543 kernel: printk: console [ttyS0] disabled
Jul 12 00:24:39.043561 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jul 12 00:24:39.043578 kernel: printk: console [ttyS0] enabled
Jul 12 00:24:39.043596 kernel: printk: bootconsole [uart0] disabled
Jul 12 00:24:39.043612 kernel: thunder_xcv, ver 1.0
Jul 12 00:24:39.043630 kernel: thunder_bgx, ver 1.0
Jul 12 00:24:39.043655 kernel: nicpf, ver 1.0
Jul 12 00:24:39.043672 kernel: nicvf, ver 1.0
Jul 12 00:24:39.043914 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 12 00:24:39.044107 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-12T00:24:38 UTC (1752279878)
Jul 12 00:24:39.044131 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 12 00:24:39.044149 kernel: NET: Registered PF_INET6 protocol family
Jul 12 00:24:39.044166 kernel: Segment Routing with IPv6
Jul 12 00:24:39.044182 kernel: In-situ OAM (IOAM) with IPv6
Jul 12 00:24:39.044205 kernel: NET: Registered PF_PACKET protocol family
Jul 12 00:24:39.044280 kernel: Key type dns_resolver registered
Jul 12 00:24:39.044298 kernel: registered taskstats version 1
Jul 12 00:24:39.044315 kernel: Loading compiled-in X.509 certificates
Jul 12 00:24:39.044345 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.186-flatcar: de2ee1d04443f96c763927c453375bbe23b5752a'
Jul 12 00:24:39.044365 kernel: Key type .fscrypt registered
Jul 12 00:24:39.044381 kernel: Key type fscrypt-provisioning registered
Jul 12 00:24:39.044397 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 12 00:24:39.044413 kernel: ima: Allocated hash algorithm: sha1
Jul 12 00:24:39.044436 kernel: ima: No architecture policies found
Jul 12 00:24:39.044452 kernel: clk: Disabling unused clocks
Jul 12 00:24:39.044469 kernel: Freeing unused kernel memory: 36416K
Jul 12 00:24:39.044485 kernel: Run /init as init process
Jul 12 00:24:39.044501 kernel: with arguments:
Jul 12 00:24:39.044518 kernel: /init
Jul 12 00:24:39.044534 kernel: with environment:
Jul 12 00:24:39.044550 kernel: HOME=/
Jul 12 00:24:39.044566 kernel: TERM=linux
Jul 12 00:24:39.044587 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 12 00:24:39.044609 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 12 00:24:39.044631 systemd[1]: Detected virtualization amazon.
Jul 12 00:24:39.044649 systemd[1]: Detected architecture arm64.
Jul 12 00:24:39.044667 systemd[1]: Running in initrd.
Jul 12 00:24:39.044686 systemd[1]: No hostname configured, using default hostname.
Jul 12 00:24:39.044703 systemd[1]: Hostname set to .
Jul 12 00:24:39.044726 systemd[1]: Initializing machine ID from VM UUID.
Jul 12 00:24:39.044744 systemd[1]: Queued start job for default target initrd.target.
Jul 12 00:24:39.044762 systemd[1]: Started systemd-ask-password-console.path.
Jul 12 00:24:39.044780 systemd[1]: Reached target cryptsetup.target.
Jul 12 00:24:39.044798 systemd[1]: Reached target paths.target.
Jul 12 00:24:39.044816 systemd[1]: Reached target slices.target.
Jul 12 00:24:39.044834 systemd[1]: Reached target swap.target.
Jul 12 00:24:39.044851 systemd[1]: Reached target timers.target.
Jul 12 00:24:39.044874 systemd[1]: Listening on iscsid.socket.
Jul 12 00:24:39.044893 systemd[1]: Listening on iscsiuio.socket.
Jul 12 00:24:39.044911 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 12 00:24:39.044945 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 12 00:24:39.044965 systemd[1]: Listening on systemd-journald.socket.
Jul 12 00:24:39.044983 systemd[1]: Listening on systemd-networkd.socket.
Jul 12 00:24:39.045001 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 12 00:24:39.045020 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 12 00:24:39.045043 systemd[1]: Reached target sockets.target.
Jul 12 00:24:39.045061 systemd[1]: Starting kmod-static-nodes.service...
Jul 12 00:24:39.045079 systemd[1]: Finished network-cleanup.service.
Jul 12 00:24:39.045097 systemd[1]: Starting systemd-fsck-usr.service...
Jul 12 00:24:39.045115 systemd[1]: Starting systemd-journald.service...
Jul 12 00:24:39.045133 systemd[1]: Starting systemd-modules-load.service...
Jul 12 00:24:39.045151 systemd[1]: Starting systemd-resolved.service...
Jul 12 00:24:39.045169 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 12 00:24:39.045188 systemd[1]: Finished kmod-static-nodes.service.
Jul 12 00:24:39.045210 systemd[1]: Finished systemd-fsck-usr.service.
Jul 12 00:24:39.045278 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 12 00:24:39.045303 kernel: audit: type=1130 audit(1752279879.001:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:39.045322 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 12 00:24:39.045340 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 12 00:24:39.045362 systemd-journald[310]: Journal started
Jul 12 00:24:39.045463 systemd-journald[310]: Runtime Journal (/run/log/journal/ec26403b234c549e9df4128d71eab779) is 8.0M, max 75.4M, 67.4M free.
Jul 12 00:24:39.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:38.982491 systemd-modules-load[311]: Inserted module 'overlay'
Jul 12 00:24:39.061263 systemd[1]: Started systemd-journald.service.
Jul 12 00:24:39.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:39.069963 systemd-resolved[312]: Positive Trust Anchors:
Jul 12 00:24:39.070341 systemd-resolved[312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:24:39.078083 kernel: audit: type=1130 audit(1752279879.063:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:39.070396 systemd-resolved[312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 12 00:24:39.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:39.073214 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 12 00:24:39.113372 kernel: audit: type=1130 audit(1752279879.078:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:39.113411 kernel: audit: type=1130 audit(1752279879.088:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:39.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:39.080260 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 12 00:24:39.101327 systemd[1]: Starting dracut-cmdline.service...
Jul 12 00:24:39.135262 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 12 00:24:39.140206 dracut-cmdline[327]: dracut-dracut-053
Jul 12 00:24:39.144000 systemd-modules-load[311]: Inserted module 'br_netfilter'
Jul 12 00:24:39.146045 kernel: Bridge firewalling registered
Jul 12 00:24:39.150429 dracut-cmdline[327]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6cb548cec1e3020e9c3dcbc1d7670f4d8bdc2e3c8e062898ccaed7fc9d588f65
Jul 12 00:24:39.186274 kernel: SCSI subsystem initialized
Jul 12 00:24:39.219198 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 12 00:24:39.219283 kernel: device-mapper: uevent: version 1.0.3
Jul 12 00:24:39.225268 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Jul 12 00:24:39.236345 systemd-modules-load[311]: Inserted module 'dm_multipath'
Jul 12 00:24:39.239809 systemd[1]: Finished systemd-modules-load.service.
Jul 12 00:24:39.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:39.252436 systemd[1]: Starting systemd-sysctl.service...
Jul 12 00:24:39.256996 kernel: audit: type=1130 audit(1752279879.240:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:39.281363 systemd[1]: Finished systemd-sysctl.service.
Jul 12 00:24:39.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:39.300255 kernel: audit: type=1130 audit(1752279879.283:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:39.308260 kernel: Loading iSCSI transport class v2.0-870.
Jul 12 00:24:39.329264 kernel: iscsi: registered transport (tcp)
Jul 12 00:24:39.357556 kernel: iscsi: registered transport (qla4xxx)
Jul 12 00:24:39.357641 kernel: QLogic iSCSI HBA Driver
Jul 12 00:24:39.551190 systemd-resolved[312]: Defaulting to hostname 'linux'.
Jul 12 00:24:39.553491 kernel: random: crng init done
Jul 12 00:24:39.555858 systemd[1]: Started systemd-resolved.service.
Jul 12 00:24:39.568755 kernel: audit: type=1130 audit(1752279879.556:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:39.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:39.557855 systemd[1]: Reached target nss-lookup.target. Jul 12 00:24:39.587590 systemd[1]: Finished dracut-cmdline.service. Jul 12 00:24:39.601321 kernel: audit: type=1130 audit(1752279879.588:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:39.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:39.599476 systemd[1]: Starting dracut-pre-udev.service... 
Jul 12 00:24:39.666270 kernel: raid6: neonx8 gen() 6426 MB/s Jul 12 00:24:39.684255 kernel: raid6: neonx8 xor() 4669 MB/s Jul 12 00:24:39.702256 kernel: raid6: neonx4 gen() 6596 MB/s Jul 12 00:24:39.720256 kernel: raid6: neonx4 xor() 4856 MB/s Jul 12 00:24:39.738255 kernel: raid6: neonx2 gen() 5818 MB/s Jul 12 00:24:39.756269 kernel: raid6: neonx2 xor() 4456 MB/s Jul 12 00:24:39.774256 kernel: raid6: neonx1 gen() 4497 MB/s Jul 12 00:24:39.792256 kernel: raid6: neonx1 xor() 3655 MB/s Jul 12 00:24:39.810255 kernel: raid6: int64x8 gen() 3433 MB/s Jul 12 00:24:39.828267 kernel: raid6: int64x8 xor() 2081 MB/s Jul 12 00:24:39.846259 kernel: raid6: int64x4 gen() 3832 MB/s Jul 12 00:24:39.864256 kernel: raid6: int64x4 xor() 2188 MB/s Jul 12 00:24:39.882256 kernel: raid6: int64x2 gen() 3613 MB/s Jul 12 00:24:39.900255 kernel: raid6: int64x2 xor() 1942 MB/s Jul 12 00:24:39.918256 kernel: raid6: int64x1 gen() 2762 MB/s Jul 12 00:24:39.937811 kernel: raid6: int64x1 xor() 1449 MB/s Jul 12 00:24:39.937857 kernel: raid6: using algorithm neonx4 gen() 6596 MB/s Jul 12 00:24:39.937882 kernel: raid6: .... xor() 4856 MB/s, rmw enabled Jul 12 00:24:39.939681 kernel: raid6: using neon recovery algorithm Jul 12 00:24:39.958268 kernel: xor: measuring software checksum speed Jul 12 00:24:39.960255 kernel: 8regs : 8768 MB/sec Jul 12 00:24:39.962257 kernel: 32regs : 10403 MB/sec Jul 12 00:24:39.965990 kernel: arm64_neon : 8796 MB/sec Jul 12 00:24:39.966023 kernel: xor: using function: 32regs (10403 MB/sec) Jul 12 00:24:40.063288 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Jul 12 00:24:40.081156 systemd[1]: Finished dracut-pre-udev.service. Jul 12 00:24:40.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:24:40.085000 audit: BPF prog-id=7 op=LOAD Jul 12 00:24:40.093000 audit: BPF prog-id=8 op=LOAD Jul 12 00:24:40.097265 kernel: audit: type=1130 audit(1752279880.081:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:40.094980 systemd[1]: Starting systemd-udevd.service... Jul 12 00:24:40.125796 systemd-udevd[509]: Using default interface naming scheme 'v252'. Jul 12 00:24:40.136858 systemd[1]: Started systemd-udevd.service. Jul 12 00:24:40.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:40.144948 systemd[1]: Starting dracut-pre-trigger.service... Jul 12 00:24:40.175606 dracut-pre-trigger[516]: rd.md=0: removing MD RAID activation Jul 12 00:24:40.240667 systemd[1]: Finished dracut-pre-trigger.service. Jul 12 00:24:40.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:40.244490 systemd[1]: Starting systemd-udev-trigger.service... Jul 12 00:24:40.343598 systemd[1]: Finished systemd-udev-trigger.service. Jul 12 00:24:40.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:24:40.462173 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 12 00:24:40.462263 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jul 12 00:24:40.488519 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jul 12 00:24:40.488772 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jul 12 00:24:40.488996 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jul 12 00:24:40.489023 kernel: nvme nvme0: pci function 0000:00:04.0 Jul 12 00:24:40.489298 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:ce:fa:d1:7e:6b Jul 12 00:24:40.489530 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 12 00:24:40.500258 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 12 00:24:40.500318 kernel: GPT:9289727 != 16777215 Jul 12 00:24:40.500342 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 12 00:24:40.502550 kernel: GPT:9289727 != 16777215 Jul 12 00:24:40.503834 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 12 00:24:40.507290 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 12 00:24:40.511157 (udev-worker)[569]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:24:40.580270 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (570) Jul 12 00:24:40.641824 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 12 00:24:40.675927 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 12 00:24:40.680652 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 12 00:24:40.709175 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 12 00:24:40.731910 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 12 00:24:40.737208 systemd[1]: Starting disk-uuid.service... Jul 12 00:24:40.748163 disk-uuid[669]: Primary Header is updated. 
Jul 12 00:24:40.748163 disk-uuid[669]: Secondary Entries is updated. Jul 12 00:24:40.748163 disk-uuid[669]: Secondary Header is updated. Jul 12 00:24:40.756263 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 12 00:24:40.768266 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 12 00:24:41.779685 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 12 00:24:41.779752 disk-uuid[670]: The operation has completed successfully. Jul 12 00:24:41.957649 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 12 00:24:41.957864 systemd[1]: Finished disk-uuid.service. Jul 12 00:24:41.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:41.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:41.976215 systemd[1]: Starting verity-setup.service... Jul 12 00:24:42.022271 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 12 00:24:42.119791 systemd[1]: Found device dev-mapper-usr.device. Jul 12 00:24:42.127823 systemd[1]: Mounting sysusr-usr.mount... Jul 12 00:24:42.138445 systemd[1]: Finished verity-setup.service. Jul 12 00:24:42.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:42.224256 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 12 00:24:42.225261 systemd[1]: Mounted sysusr-usr.mount. Jul 12 00:24:42.228413 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 12 00:24:42.232584 systemd[1]: Starting ignition-setup.service... Jul 12 00:24:42.243847 systemd[1]: Starting parse-ip-for-networkd.service... 
Jul 12 00:24:42.274953 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:24:42.275025 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 12 00:24:42.277283 kernel: BTRFS info (device nvme0n1p6): has skinny extents Jul 12 00:24:42.310252 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 12 00:24:42.330040 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 12 00:24:42.362938 systemd[1]: Finished ignition-setup.service. Jul 12 00:24:42.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:42.366519 systemd[1]: Starting ignition-fetch-offline.service... Jul 12 00:24:42.403065 systemd[1]: Finished parse-ip-for-networkd.service. Jul 12 00:24:42.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:42.405000 audit: BPF prog-id=9 op=LOAD Jul 12 00:24:42.409119 systemd[1]: Starting systemd-networkd.service... Jul 12 00:24:42.460885 systemd-networkd[1182]: lo: Link UP Jul 12 00:24:42.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:42.460907 systemd-networkd[1182]: lo: Gained carrier Jul 12 00:24:42.461875 systemd-networkd[1182]: Enumeration completed Jul 12 00:24:42.462022 systemd[1]: Started systemd-networkd.service. Jul 12 00:24:42.465018 systemd-networkd[1182]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:24:42.467870 systemd[1]: Reached target network.target. 
Jul 12 00:24:42.476895 systemd-networkd[1182]: eth0: Link UP Jul 12 00:24:42.476904 systemd-networkd[1182]: eth0: Gained carrier Jul 12 00:24:42.477996 systemd[1]: Starting iscsiuio.service... Jul 12 00:24:42.501791 systemd[1]: Started iscsiuio.service. Jul 12 00:24:42.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:42.511130 systemd[1]: Starting iscsid.service... Jul 12 00:24:42.512511 systemd-networkd[1182]: eth0: DHCPv4 address 172.31.29.120/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 12 00:24:42.525204 iscsid[1187]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 12 00:24:42.525204 iscsid[1187]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jul 12 00:24:42.525204 iscsid[1187]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 12 00:24:42.525204 iscsid[1187]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 12 00:24:42.525204 iscsid[1187]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 12 00:24:42.547387 iscsid[1187]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 12 00:24:42.554835 systemd[1]: Started iscsid.service. Jul 12 00:24:42.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:42.558956 systemd[1]: Starting dracut-initqueue.service... 
Jul 12 00:24:42.584680 systemd[1]: Finished dracut-initqueue.service. Jul 12 00:24:42.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:42.588340 systemd[1]: Reached target remote-fs-pre.target. Jul 12 00:24:42.596404 systemd[1]: Reached target remote-cryptsetup.target. Jul 12 00:24:42.600531 systemd[1]: Reached target remote-fs.target. Jul 12 00:24:42.603601 systemd[1]: Starting dracut-pre-mount.service... Jul 12 00:24:42.621277 systemd[1]: Finished dracut-pre-mount.service. Jul 12 00:24:42.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:43.090760 ignition[1164]: Ignition 2.14.0 Jul 12 00:24:43.091266 ignition[1164]: Stage: fetch-offline Jul 12 00:24:43.091858 ignition[1164]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:24:43.092608 ignition[1164]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 12 00:24:43.121180 ignition[1164]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:24:43.122638 ignition[1164]: Ignition finished successfully Jul 12 00:24:43.128519 systemd[1]: Finished ignition-fetch-offline.service. Jul 12 00:24:43.141680 kernel: kauditd_printk_skb: 16 callbacks suppressed Jul 12 00:24:43.141722 kernel: audit: type=1130 audit(1752279883.133:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:24:43.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:43.136117 systemd[1]: Starting ignition-fetch.service... Jul 12 00:24:43.154362 ignition[1206]: Ignition 2.14.0 Jul 12 00:24:43.156152 ignition[1206]: Stage: fetch Jul 12 00:24:43.157776 ignition[1206]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:24:43.157852 ignition[1206]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 12 00:24:43.172302 ignition[1206]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:24:43.174828 ignition[1206]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:24:43.186401 ignition[1206]: INFO : PUT result: OK Jul 12 00:24:43.191927 ignition[1206]: DEBUG : parsed url from cmdline: "" Jul 12 00:24:43.191927 ignition[1206]: INFO : no config URL provided Jul 12 00:24:43.191927 ignition[1206]: INFO : reading system config file "/usr/lib/ignition/user.ign" Jul 12 00:24:43.191927 ignition[1206]: INFO : no config at "/usr/lib/ignition/user.ign" Jul 12 00:24:43.191927 ignition[1206]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:24:43.206833 ignition[1206]: INFO : PUT result: OK Jul 12 00:24:43.206833 ignition[1206]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jul 12 00:24:43.206833 ignition[1206]: INFO : GET result: OK Jul 12 00:24:43.206833 ignition[1206]: DEBUG : parsing config with SHA512: 9348cdb0b25bce6aee2647dd063f72268ef57d39e3e7aa28677fc82d252e2df801218f99890855d051891e0ccf8b51b981e32cad2a2da9c693f3b9d80447cef3 Jul 12 00:24:43.215838 unknown[1206]: fetched base config from "system" Jul 12 00:24:43.215867 unknown[1206]: fetched base config from "system" Jul 12 
00:24:43.215898 unknown[1206]: fetched user config from "aws" Jul 12 00:24:43.227130 ignition[1206]: fetch: fetch complete Jul 12 00:24:43.227317 ignition[1206]: fetch: fetch passed Jul 12 00:24:43.227411 ignition[1206]: Ignition finished successfully Jul 12 00:24:43.234353 systemd[1]: Finished ignition-fetch.service. Jul 12 00:24:43.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:43.239068 systemd[1]: Starting ignition-kargs.service... Jul 12 00:24:43.248911 kernel: audit: type=1130 audit(1752279883.236:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:43.262915 ignition[1212]: Ignition 2.14.0 Jul 12 00:24:43.262942 ignition[1212]: Stage: kargs Jul 12 00:24:43.263249 ignition[1212]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:24:43.263304 ignition[1212]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 12 00:24:43.278278 ignition[1212]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:24:43.280831 ignition[1212]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:24:43.284264 ignition[1212]: INFO : PUT result: OK Jul 12 00:24:43.289541 ignition[1212]: kargs: kargs passed Jul 12 00:24:43.291202 ignition[1212]: Ignition finished successfully Jul 12 00:24:43.294617 systemd[1]: Finished ignition-kargs.service. Jul 12 00:24:43.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:43.303612 systemd[1]: Starting ignition-disks.service... 
Jul 12 00:24:43.318834 kernel: audit: type=1130 audit(1752279883.293:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:43.322267 ignition[1218]: Ignition 2.14.0 Jul 12 00:24:43.322297 ignition[1218]: Stage: disks Jul 12 00:24:43.322746 ignition[1218]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:24:43.322833 ignition[1218]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 12 00:24:43.337127 ignition[1218]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:24:43.346904 ignition[1218]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:24:43.346904 ignition[1218]: INFO : PUT result: OK Jul 12 00:24:43.357820 ignition[1218]: disks: disks passed Jul 12 00:24:43.357926 ignition[1218]: Ignition finished successfully Jul 12 00:24:43.362250 systemd[1]: Finished ignition-disks.service. Jul 12 00:24:43.376785 kernel: audit: type=1130 audit(1752279883.361:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:43.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:43.372850 systemd[1]: Reached target initrd-root-device.target. Jul 12 00:24:43.374925 systemd[1]: Reached target local-fs-pre.target. Jul 12 00:24:43.376824 systemd[1]: Reached target local-fs.target. Jul 12 00:24:43.378647 systemd[1]: Reached target sysinit.target. Jul 12 00:24:43.382104 systemd[1]: Reached target basic.target. Jul 12 00:24:43.385765 systemd[1]: Starting systemd-fsck-root.service... 
Jul 12 00:24:43.440827 systemd-fsck[1226]: ROOT: clean, 619/553520 files, 56022/553472 blocks Jul 12 00:24:43.447246 systemd[1]: Finished systemd-fsck-root.service. Jul 12 00:24:43.462376 kernel: audit: type=1130 audit(1752279883.448:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:43.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:43.452345 systemd[1]: Mounting sysroot.mount... Jul 12 00:24:43.487261 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 12 00:24:43.487936 systemd[1]: Mounted sysroot.mount. Jul 12 00:24:43.490993 systemd[1]: Reached target initrd-root-fs.target. Jul 12 00:24:43.504063 systemd[1]: Mounting sysroot-usr.mount... Jul 12 00:24:43.513092 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 12 00:24:43.514121 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 12 00:24:43.514176 systemd[1]: Reached target ignition-diskful.target. Jul 12 00:24:43.526112 systemd[1]: Mounted sysroot-usr.mount. Jul 12 00:24:43.532774 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 12 00:24:43.537957 systemd[1]: Starting initrd-setup-root.service... 
Jul 12 00:24:43.558039 initrd-setup-root[1248]: cut: /sysroot/etc/passwd: No such file or directory Jul 12 00:24:43.570270 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1243) Jul 12 00:24:43.571323 initrd-setup-root[1256]: cut: /sysroot/etc/group: No such file or directory Jul 12 00:24:43.580050 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:24:43.580095 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 12 00:24:43.582642 kernel: BTRFS info (device nvme0n1p6): has skinny extents Jul 12 00:24:43.587400 initrd-setup-root[1280]: cut: /sysroot/etc/shadow: No such file or directory Jul 12 00:24:43.596668 initrd-setup-root[1288]: cut: /sysroot/etc/gshadow: No such file or directory Jul 12 00:24:43.606268 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 12 00:24:43.616766 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 12 00:24:43.701701 systemd[1]: Finished initrd-setup-root.service. Jul 12 00:24:43.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:43.706859 systemd[1]: Starting ignition-mount.service... Jul 12 00:24:43.718963 kernel: audit: type=1130 audit(1752279883.704:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:43.719907 systemd[1]: Starting sysroot-boot.service... Jul 12 00:24:43.727863 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Jul 12 00:24:43.728112 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Jul 12 00:24:43.752959 ignition[1308]: INFO : Ignition 2.14.0 Jul 12 00:24:43.752959 ignition[1308]: INFO : Stage: mount Jul 12 00:24:43.752959 ignition[1308]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:24:43.752959 ignition[1308]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 12 00:24:43.780635 ignition[1308]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:24:43.783718 ignition[1308]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:24:43.787447 systemd[1]: Finished sysroot-boot.service. Jul 12 00:24:43.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:43.798434 ignition[1308]: INFO : PUT result: OK Jul 12 00:24:43.800101 kernel: audit: type=1130 audit(1752279883.788:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:43.803694 ignition[1308]: INFO : mount: mount passed Jul 12 00:24:43.805500 ignition[1308]: INFO : Ignition finished successfully Jul 12 00:24:43.807979 systemd[1]: Finished ignition-mount.service. Jul 12 00:24:43.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:43.812552 systemd[1]: Starting ignition-files.service... Jul 12 00:24:43.825100 kernel: audit: type=1130 audit(1752279883.809:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:24:43.830287 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 12 00:24:43.854264 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by mount (1318) Jul 12 00:24:43.860426 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:24:43.860469 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 12 00:24:43.862612 kernel: BTRFS info (device nvme0n1p6): has skinny extents Jul 12 00:24:43.876249 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 12 00:24:43.881333 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 12 00:24:43.901326 ignition[1337]: INFO : Ignition 2.14.0 Jul 12 00:24:43.901326 ignition[1337]: INFO : Stage: files Jul 12 00:24:43.904914 ignition[1337]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:24:43.904914 ignition[1337]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 12 00:24:43.922024 ignition[1337]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:24:43.924747 ignition[1337]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:24:43.928509 ignition[1337]: INFO : PUT result: OK Jul 12 00:24:43.933320 ignition[1337]: DEBUG : files: compiled without relabeling support, skipping Jul 12 00:24:43.937435 ignition[1337]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 12 00:24:43.937435 ignition[1337]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 12 00:24:43.961169 ignition[1337]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 12 00:24:43.964378 ignition[1337]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 12 00:24:43.968458 unknown[1337]: wrote ssh authorized keys file for user: core Jul 12 
00:24:43.970995 ignition[1337]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 12 00:24:43.974731 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 12 00:24:43.978541 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 12 00:24:43.978541 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 12 00:24:43.978541 ignition[1337]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 12 00:24:44.075317 ignition[1337]: INFO : GET result: OK Jul 12 00:24:44.246490 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 12 00:24:44.246490 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 12 00:24:44.254494 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 12 00:24:44.254494 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:24:44.254494 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:24:44.270061 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Jul 12 00:24:44.270061 ignition[1337]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Jul 12 00:24:44.280983 systemd-networkd[1182]: eth0: 
Gained IPv6LL Jul 12 00:24:44.285426 ignition[1337]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2592694892" Jul 12 00:24:44.288577 ignition[1337]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2592694892": device or resource busy Jul 12 00:24:44.288577 ignition[1337]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2592694892", trying btrfs: device or resource busy Jul 12 00:24:44.288577 ignition[1337]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2592694892" Jul 12 00:24:44.288577 ignition[1337]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2592694892" Jul 12 00:24:44.309016 ignition[1337]: INFO : op(3): [started] unmounting "/mnt/oem2592694892" Jul 12 00:24:44.309016 ignition[1337]: INFO : op(3): [finished] unmounting "/mnt/oem2592694892" Jul 12 00:24:44.309016 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Jul 12 00:24:44.321003 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 12 00:24:44.321003 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 12 00:24:44.321003 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 12 00:24:44.321003 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 12 00:24:44.321003 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/home/core/install.sh" Jul 12 00:24:44.321003 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/home/core/install.sh" Jul 12 00:24:44.321003 ignition[1337]: 
INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 12 00:24:44.321003 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 12 00:24:44.321003 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Jul 12 00:24:44.321003 ignition[1337]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Jul 12 00:24:44.365540 ignition[1337]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem288917450" Jul 12 00:24:44.365540 ignition[1337]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem288917450": device or resource busy Jul 12 00:24:44.365540 ignition[1337]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem288917450", trying btrfs: device or resource busy Jul 12 00:24:44.365540 ignition[1337]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem288917450" Jul 12 00:24:44.365540 ignition[1337]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem288917450" Jul 12 00:24:44.365540 ignition[1337]: INFO : op(6): [started] unmounting "/mnt/oem288917450" Jul 12 00:24:44.365540 ignition[1337]: INFO : op(6): [finished] unmounting "/mnt/oem288917450" Jul 12 00:24:44.365540 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Jul 12 00:24:44.365540 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:24:44.365540 ignition[1337]: INFO : GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Jul 12 00:24:44.963941 ignition[1337]: INFO : GET result: OK Jul 12 00:24:45.513348 ignition[1337]: INFO : 
files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:24:45.518383 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Jul 12 00:24:45.518383 ignition[1337]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Jul 12 00:24:45.529871 ignition[1337]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem450219788" Jul 12 00:24:45.529871 ignition[1337]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem450219788": device or resource busy Jul 12 00:24:45.529871 ignition[1337]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem450219788", trying btrfs: device or resource busy Jul 12 00:24:45.529871 ignition[1337]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem450219788" Jul 12 00:24:45.549905 ignition[1337]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem450219788" Jul 12 00:24:45.549905 ignition[1337]: INFO : op(9): [started] unmounting "/mnt/oem450219788" Jul 12 00:24:45.549905 ignition[1337]: INFO : op(9): [finished] unmounting "/mnt/oem450219788" Jul 12 00:24:45.549905 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Jul 12 00:24:45.549905 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Jul 12 00:24:45.549905 ignition[1337]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Jul 12 00:24:45.573521 ignition[1337]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3192828901" Jul 12 00:24:45.573521 ignition[1337]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3192828901": device or resource busy Jul 12 
00:24:45.573521 ignition[1337]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3192828901", trying btrfs: device or resource busy Jul 12 00:24:45.573521 ignition[1337]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3192828901" Jul 12 00:24:45.573521 ignition[1337]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3192828901" Jul 12 00:24:45.573521 ignition[1337]: INFO : op(c): [started] unmounting "/mnt/oem3192828901" Jul 12 00:24:45.573521 ignition[1337]: INFO : op(c): [finished] unmounting "/mnt/oem3192828901" Jul 12 00:24:45.573521 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Jul 12 00:24:45.573521 ignition[1337]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service" Jul 12 00:24:45.573521 ignition[1337]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service" Jul 12 00:24:45.573521 ignition[1337]: INFO : files: op(11): [started] processing unit "amazon-ssm-agent.service" Jul 12 00:24:45.573521 ignition[1337]: INFO : files: op(11): op(12): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Jul 12 00:24:45.573521 ignition[1337]: INFO : files: op(11): op(12): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Jul 12 00:24:45.573521 ignition[1337]: INFO : files: op(11): [finished] processing unit "amazon-ssm-agent.service" Jul 12 00:24:45.573521 ignition[1337]: INFO : files: op(13): [started] processing unit "nvidia.service" Jul 12 00:24:45.573521 ignition[1337]: INFO : files: op(13): [finished] processing unit "nvidia.service" Jul 12 00:24:45.573521 ignition[1337]: INFO : files: op(14): [started] processing unit "containerd.service" Jul 12 00:24:45.573521 ignition[1337]: INFO : files: op(14): op(15): [started] writing systemd drop-in 
"10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 12 00:24:45.573521 ignition[1337]: INFO : files: op(14): op(15): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 12 00:24:45.573521 ignition[1337]: INFO : files: op(14): [finished] processing unit "containerd.service" Jul 12 00:24:45.661864 ignition[1337]: INFO : files: op(16): [started] processing unit "prepare-helm.service" Jul 12 00:24:45.661864 ignition[1337]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 12 00:24:45.661864 ignition[1337]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 12 00:24:45.661864 ignition[1337]: INFO : files: op(16): [finished] processing unit "prepare-helm.service" Jul 12 00:24:45.661864 ignition[1337]: INFO : files: op(18): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 12 00:24:45.661864 ignition[1337]: INFO : files: op(18): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 12 00:24:45.661864 ignition[1337]: INFO : files: op(19): [started] setting preset to enabled for "amazon-ssm-agent.service" Jul 12 00:24:45.661864 ignition[1337]: INFO : files: op(19): [finished] setting preset to enabled for "amazon-ssm-agent.service" Jul 12 00:24:45.661864 ignition[1337]: INFO : files: op(1a): [started] setting preset to enabled for "nvidia.service" Jul 12 00:24:45.661864 ignition[1337]: INFO : files: op(1a): [finished] setting preset to enabled for "nvidia.service" Jul 12 00:24:45.661864 ignition[1337]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-helm.service" Jul 12 00:24:45.661864 ignition[1337]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-helm.service" Jul 12 
00:24:45.577571 systemd[1]: mnt-oem3192828901.mount: Deactivated successfully. Jul 12 00:24:45.702549 ignition[1337]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 12 00:24:45.702549 ignition[1337]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 12 00:24:45.702549 ignition[1337]: INFO : files: files passed Jul 12 00:24:45.702549 ignition[1337]: INFO : Ignition finished successfully Jul 12 00:24:45.713029 systemd[1]: Finished ignition-files.service. Jul 12 00:24:45.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:45.723639 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 12 00:24:45.738476 kernel: audit: type=1130 audit(1752279885.715:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:45.732665 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 12 00:24:45.734072 systemd[1]: Starting ignition-quench.service... Jul 12 00:24:45.750189 initrd-setup-root-after-ignition[1362]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 12 00:24:45.754854 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 12 00:24:45.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:45.759429 systemd[1]: ignition-quench.service: Deactivated successfully. 
Jul 12 00:24:45.772595 kernel: audit: type=1130 audit(1752279885.757:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:45.759642 systemd[1]: Finished ignition-quench.service. Jul 12 00:24:45.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:45.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:45.774351 systemd[1]: Reached target ignition-complete.target. Jul 12 00:24:45.779406 systemd[1]: Starting initrd-parse-etc.service... Jul 12 00:24:45.807884 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 12 00:24:45.810250 systemd[1]: Finished initrd-parse-etc.service. Jul 12 00:24:45.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:45.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:45.813913 systemd[1]: Reached target initrd-fs.target. Jul 12 00:24:45.815829 systemd[1]: Reached target initrd.target. Jul 12 00:24:45.817513 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 12 00:24:45.822518 systemd[1]: Starting dracut-pre-pivot.service... Jul 12 00:24:45.851099 systemd[1]: Finished dracut-pre-pivot.service. 
Jul 12 00:24:45.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:45.856304 systemd[1]: Starting initrd-cleanup.service... Jul 12 00:24:45.877295 systemd[1]: Stopped target nss-lookup.target. Jul 12 00:24:45.881056 systemd[1]: Stopped target remote-cryptsetup.target. Jul 12 00:24:45.885006 systemd[1]: Stopped target timers.target. Jul 12 00:24:45.888403 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 12 00:24:45.890694 systemd[1]: Stopped dracut-pre-pivot.service. Jul 12 00:24:45.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:45.894587 systemd[1]: Stopped target initrd.target. Jul 12 00:24:45.898018 systemd[1]: Stopped target basic.target. Jul 12 00:24:45.901414 systemd[1]: Stopped target ignition-complete.target. Jul 12 00:24:45.905305 systemd[1]: Stopped target ignition-diskful.target. Jul 12 00:24:45.909136 systemd[1]: Stopped target initrd-root-device.target. Jul 12 00:24:45.913036 systemd[1]: Stopped target remote-fs.target. Jul 12 00:24:45.916700 systemd[1]: Stopped target remote-fs-pre.target. Jul 12 00:24:45.920456 systemd[1]: Stopped target sysinit.target. Jul 12 00:24:45.923916 systemd[1]: Stopped target local-fs.target. Jul 12 00:24:45.927486 systemd[1]: Stopped target local-fs-pre.target. Jul 12 00:24:45.931201 systemd[1]: Stopped target swap.target. Jul 12 00:24:45.934401 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 12 00:24:45.936630 systemd[1]: Stopped dracut-pre-mount.service. Jul 12 00:24:45.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:24:45.943363 systemd[1]: Stopped target cryptsetup.target. Jul 12 00:24:45.946861 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 12 00:24:45.949023 systemd[1]: Stopped dracut-initqueue.service. Jul 12 00:24:45.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:45.952586 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 12 00:24:45.955294 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 12 00:24:45.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:45.959495 systemd[1]: ignition-files.service: Deactivated successfully. Jul 12 00:24:45.961701 systemd[1]: Stopped ignition-files.service. Jul 12 00:24:45.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:45.966564 systemd[1]: Stopping ignition-mount.service... Jul 12 00:24:45.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:45.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:45.969739 systemd[1]: Stopping iscsid.service... Jul 12 00:24:45.986091 iscsid[1187]: iscsid shutting down. Jul 12 00:24:45.972870 systemd[1]: Stopping sysroot-boot.service... 
Jul 12 00:24:45.977014 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 12 00:24:45.977575 systemd[1]: Stopped systemd-udev-trigger.service. Jul 12 00:24:45.980533 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 12 00:24:45.981118 systemd[1]: Stopped dracut-pre-trigger.service. Jul 12 00:24:46.002031 systemd[1]: iscsid.service: Deactivated successfully. Jul 12 00:24:46.008784 ignition[1375]: INFO : Ignition 2.14.0 Jul 12 00:24:46.008784 ignition[1375]: INFO : Stage: umount Jul 12 00:24:46.008784 ignition[1375]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:24:46.008784 ignition[1375]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 12 00:24:46.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:46.002456 systemd[1]: Stopped iscsid.service. Jul 12 00:24:46.023949 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 12 00:24:46.025506 systemd[1]: Finished initrd-cleanup.service. Jul 12 00:24:46.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:46.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:46.033614 systemd[1]: Stopping iscsiuio.service... Jul 12 00:24:46.041644 systemd[1]: iscsiuio.service: Deactivated successfully. 
Jul 12 00:24:46.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:46.051014 ignition[1375]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 12 00:24:46.051014 ignition[1375]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 12 00:24:46.041859 systemd[1]: Stopped iscsiuio.service. Jul 12 00:24:46.059427 ignition[1375]: INFO : PUT result: OK Jul 12 00:24:46.064850 ignition[1375]: INFO : umount: umount passed Jul 12 00:24:46.067463 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 12 00:24:46.071125 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 12 00:24:46.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:46.082751 ignition[1375]: INFO : Ignition finished successfully Jul 12 00:24:46.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:46.071336 systemd[1]: Stopped ignition-mount.service. Jul 12 00:24:46.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:46.075003 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 12 00:24:46.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:46.075099 systemd[1]: Stopped ignition-disks.service. 
Jul 12 00:24:46.084630 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 12 00:24:46.084724 systemd[1]: Stopped ignition-kargs.service. Jul 12 00:24:46.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:46.087977 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 12 00:24:46.088057 systemd[1]: Stopped ignition-fetch.service. Jul 12 00:24:46.094012 systemd[1]: Stopped target network.target. Jul 12 00:24:46.100640 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 12 00:24:46.101145 systemd[1]: Stopped ignition-fetch-offline.service. Jul 12 00:24:46.106708 systemd[1]: Stopped target paths.target. Jul 12 00:24:46.109439 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 12 00:24:46.117670 systemd[1]: Stopped systemd-ask-password-console.path. Jul 12 00:24:46.126294 systemd[1]: Stopped target slices.target. Jul 12 00:24:46.129468 systemd[1]: Stopped target sockets.target. Jul 12 00:24:46.132813 systemd[1]: iscsid.socket: Deactivated successfully. Jul 12 00:24:46.133733 systemd[1]: Closed iscsid.socket. Jul 12 00:24:46.137668 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 12 00:24:46.137844 systemd[1]: Closed iscsiuio.socket. Jul 12 00:24:46.142444 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 12 00:24:46.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:46.142540 systemd[1]: Stopped ignition-setup.service. Jul 12 00:24:46.145130 systemd[1]: Stopping systemd-networkd.service... Jul 12 00:24:46.151367 systemd[1]: Stopping systemd-resolved.service... Jul 12 00:24:46.153686 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Jul 12 00:24:46.153860 systemd[1]: Stopped sysroot-boot.service. Jul 12 00:24:46.158297 systemd-networkd[1182]: eth0: DHCPv6 lease lost Jul 12 00:24:46.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:46.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:46.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:46.166906 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 12 00:24:46.167144 systemd[1]: Stopped systemd-networkd.service. Jul 12 00:24:46.169473 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 12 00:24:46.169664 systemd[1]: Stopped systemd-resolved.service. Jul 12 00:24:46.183000 audit: BPF prog-id=9 op=UNLOAD Jul 12 00:24:46.183000 audit: BPF prog-id=6 op=UNLOAD Jul 12 00:24:46.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:46.175579 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 12 00:24:46.175670 systemd[1]: Closed systemd-networkd.socket. Jul 12 00:24:46.180400 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 12 00:24:46.180499 systemd[1]: Stopped initrd-setup-root.service. Jul 12 00:24:46.194139 systemd[1]: Stopping network-cleanup.service... Jul 12 00:24:46.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 12 00:24:46.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:46.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:46.198824 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 12 00:24:46.198942 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 12 00:24:46.201021 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 00:24:46.201101 systemd[1]: Stopped systemd-sysctl.service. Jul 12 00:24:46.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:46.203084 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 12 00:24:46.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:46.203165 systemd[1]: Stopped systemd-modules-load.service. Jul 12 00:24:46.206931 systemd[1]: Stopping systemd-udevd.service... Jul 12 00:24:46.224024 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 12 00:24:46.224216 systemd[1]: Stopped network-cleanup.service. Jul 12 00:24:46.229792 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 12 00:24:46.230075 systemd[1]: Stopped systemd-udevd.service. Jul 12 00:24:46.238924 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 12 00:24:46.239010 systemd[1]: Closed systemd-udevd-control.socket. Jul 12 00:24:46.244377 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jul 12 00:24:46.244474 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 12 00:24:46.247919 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 12 00:24:46.251530 systemd[1]: Stopped dracut-pre-udev.service. Jul 12 00:24:46.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:46.263297 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 12 00:24:46.263403 systemd[1]: Stopped dracut-cmdline.service. Jul 12 00:24:46.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:46.267210 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 12 00:24:46.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:46.268734 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 12 00:24:46.275526 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 12 00:24:46.285403 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 12 00:24:46.285531 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Jul 12 00:24:46.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:46.295424 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 12 00:24:46.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:24:46.295522 systemd[1]: Stopped kmod-static-nodes.service. Jul 12 00:24:46.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:46.297826 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:24:46.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:46.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:24:46.298016 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 12 00:24:46.302379 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 12 00:24:46.302581 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 12 00:24:46.305374 systemd[1]: Reached target initrd-switch-root.target. Jul 12 00:24:46.310943 systemd[1]: Starting initrd-switch-root.service... Jul 12 00:24:46.331871 systemd[1]: Switching root. Jul 12 00:24:46.335000 audit: BPF prog-id=5 op=UNLOAD Jul 12 00:24:46.335000 audit: BPF prog-id=4 op=UNLOAD Jul 12 00:24:46.336000 audit: BPF prog-id=3 op=UNLOAD Jul 12 00:24:46.338000 audit: BPF prog-id=8 op=UNLOAD Jul 12 00:24:46.338000 audit: BPF prog-id=7 op=UNLOAD Jul 12 00:24:46.359395 systemd-journald[310]: Journal stopped Jul 12 00:24:50.938945 systemd-journald[310]: Received SIGTERM from PID 1 (systemd). Jul 12 00:24:50.939067 kernel: SELinux: Class mctp_socket not defined in policy. Jul 12 00:24:50.939110 kernel: SELinux: Class anon_inode not defined in policy. 
Jul 12 00:24:50.939141 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 12 00:24:50.939172 kernel: SELinux: policy capability network_peer_controls=1 Jul 12 00:24:50.939203 kernel: SELinux: policy capability open_perms=1 Jul 12 00:24:50.947510 kernel: SELinux: policy capability extended_socket_class=1 Jul 12 00:24:50.947554 kernel: SELinux: policy capability always_check_network=0 Jul 12 00:24:50.947584 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 12 00:24:50.947615 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 12 00:24:50.947644 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 12 00:24:50.947673 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 12 00:24:50.947703 systemd[1]: Successfully loaded SELinux policy in 72.890ms. Jul 12 00:24:50.947752 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.325ms. Jul 12 00:24:50.947794 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 12 00:24:50.947824 systemd[1]: Detected virtualization amazon. Jul 12 00:24:50.947858 systemd[1]: Detected architecture arm64. Jul 12 00:24:50.947889 systemd[1]: Detected first boot. Jul 12 00:24:50.947925 systemd[1]: Initializing machine ID from VM UUID. Jul 12 00:24:50.947957 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 12 00:24:50.947997 systemd[1]: Populated /etc with preset unit settings. Jul 12 00:24:50.948030 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Jul 12 00:24:50.948064 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 12 00:24:50.948101 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 00:24:50.948144 systemd[1]: Queued start job for default target multi-user.target.
Jul 12 00:24:50.948180 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device.
Jul 12 00:24:50.948210 systemd[1]: Created slice system-addon\x2dconfig.slice.
Jul 12 00:24:50.948263 systemd[1]: Created slice system-addon\x2drun.slice.
Jul 12 00:24:50.948299 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Jul 12 00:24:50.948332 systemd[1]: Created slice system-getty.slice.
Jul 12 00:24:50.948364 systemd[1]: Created slice system-modprobe.slice.
Jul 12 00:24:50.948398 systemd[1]: Created slice system-serial\x2dgetty.slice.
Jul 12 00:24:50.948434 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Jul 12 00:24:50.948468 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Jul 12 00:24:50.948498 systemd[1]: Created slice user.slice.
Jul 12 00:24:50.948530 systemd[1]: Started systemd-ask-password-console.path.
Jul 12 00:24:50.948559 systemd[1]: Started systemd-ask-password-wall.path.
Jul 12 00:24:50.948591 systemd[1]: Set up automount boot.automount.
Jul 12 00:24:50.948620 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Jul 12 00:24:50.948656 systemd[1]: Reached target integritysetup.target.
Jul 12 00:24:50.948688 systemd[1]: Reached target remote-cryptsetup.target.
Jul 12 00:24:50.948745 systemd[1]: Reached target remote-fs.target.
Jul 12 00:24:50.948777 systemd[1]: Reached target slices.target.
Jul 12 00:24:50.948809 systemd[1]: Reached target swap.target.
Jul 12 00:24:50.948839 systemd[1]: Reached target torcx.target.
Jul 12 00:24:50.948871 systemd[1]: Reached target veritysetup.target.
Jul 12 00:24:50.948900 systemd[1]: Listening on systemd-coredump.socket.
Jul 12 00:24:50.948931 systemd[1]: Listening on systemd-initctl.socket.
Jul 12 00:24:50.948961 kernel: kauditd_printk_skb: 57 callbacks suppressed
Jul 12 00:24:50.948997 kernel: audit: type=1400 audit(1752279890.550:87): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jul 12 00:24:50.949029 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 12 00:24:50.949062 kernel: audit: type=1335 audit(1752279890.551:88): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Jul 12 00:24:50.949091 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 12 00:24:50.949120 systemd[1]: Listening on systemd-journald.socket.
Jul 12 00:24:50.949148 systemd[1]: Listening on systemd-networkd.socket.
Jul 12 00:24:50.949179 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 12 00:24:50.949213 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 12 00:24:50.949261 systemd[1]: Listening on systemd-userdbd.socket.
Jul 12 00:24:50.949291 systemd[1]: Mounting dev-hugepages.mount...
Jul 12 00:24:50.949321 systemd[1]: Mounting dev-mqueue.mount...
Jul 12 00:24:50.949353 systemd[1]: Mounting media.mount...
Jul 12 00:24:50.949383 systemd[1]: Mounting sys-kernel-debug.mount...
Jul 12 00:24:50.949414 systemd[1]: Mounting sys-kernel-tracing.mount...
Jul 12 00:24:50.949446 systemd[1]: Mounting tmp.mount...
Jul 12 00:24:50.949477 systemd[1]: Starting flatcar-tmpfiles.service...
Jul 12 00:24:50.949506 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 12 00:24:50.949541 systemd[1]: Starting kmod-static-nodes.service...
Jul 12 00:24:50.949589 systemd[1]: Starting modprobe@configfs.service...
Jul 12 00:24:50.949621 systemd[1]: Starting modprobe@dm_mod.service...
Jul 12 00:24:50.949662 systemd[1]: Starting modprobe@drm.service...
Jul 12 00:24:50.949700 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 12 00:24:50.949732 systemd[1]: Starting modprobe@fuse.service...
Jul 12 00:24:50.949762 systemd[1]: Starting modprobe@loop.service...
Jul 12 00:24:50.949792 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 12 00:24:50.949827 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jul 12 00:24:50.949857 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Jul 12 00:24:50.949885 systemd[1]: Starting systemd-journald.service...
Jul 12 00:24:50.949914 systemd[1]: Starting systemd-modules-load.service...
Jul 12 00:24:50.949943 kernel: fuse: init (API version 7.34)
Jul 12 00:24:50.949971 systemd[1]: Starting systemd-network-generator.service...
Jul 12 00:24:50.950001 systemd[1]: Starting systemd-remount-fs.service...
Jul 12 00:24:50.950045 systemd[1]: Starting systemd-udev-trigger.service...
Jul 12 00:24:50.950080 kernel: loop: module loaded
Jul 12 00:24:50.950115 systemd[1]: Mounted dev-hugepages.mount.
Jul 12 00:24:50.950144 systemd[1]: Mounted dev-mqueue.mount.
Jul 12 00:24:50.950173 systemd[1]: Mounted media.mount.
Jul 12 00:24:50.950202 systemd[1]: Mounted sys-kernel-debug.mount.
Jul 12 00:24:50.958354 systemd[1]: Mounted sys-kernel-tracing.mount.
Jul 12 00:24:50.958397 systemd[1]: Mounted tmp.mount.
Jul 12 00:24:50.960098 systemd[1]: Finished kmod-static-nodes.service.
Jul 12 00:24:50.960136 kernel: audit: type=1130 audit(1752279890.847:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:50.960181 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 12 00:24:50.960246 systemd[1]: Finished modprobe@configfs.service.
Jul 12 00:24:50.960282 kernel: audit: type=1130 audit(1752279890.877:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:50.960314 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:24:50.960344 kernel: audit: type=1131 audit(1752279890.877:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:50.960372 systemd[1]: Finished modprobe@dm_mod.service.
Jul 12 00:24:50.960401 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 12 00:24:50.960442 kernel: audit: type=1130 audit(1752279890.902:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:50.960482 systemd[1]: Finished modprobe@drm.service.
Jul 12 00:24:50.960520 kernel: audit: type=1131 audit(1752279890.902:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:50.960553 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:24:50.960582 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 12 00:24:50.960611 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 12 00:24:50.960640 kernel: audit: type=1130 audit(1752279890.925:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:50.960668 systemd[1]: Finished modprobe@fuse.service.
Jul 12 00:24:50.960697 kernel: audit: type=1131 audit(1752279890.926:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:50.960731 systemd-journald[1527]: Journal started
Jul 12 00:24:50.960835 systemd-journald[1527]: Runtime Journal (/run/log/journal/ec26403b234c549e9df4128d71eab779) is 8.0M, max 75.4M, 67.4M free.
Jul 12 00:24:50.960899 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:24:50.551000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Jul 12 00:24:50.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:50.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:50.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:50.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:50.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:50.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:50.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:50.980580 systemd[1]: Finished modprobe@loop.service.
Jul 12 00:24:50.980649 kernel: audit: type=1130 audit(1752279890.935:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:50.980688 systemd[1]: Started systemd-journald.service.
Jul 12 00:24:50.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:50.982255 systemd[1]: Finished systemd-modules-load.service.
Jul 12 00:24:50.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:50.935000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jul 12 00:24:50.935000 audit[1527]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffea1bf750 a2=4000 a3=1 items=0 ppid=1 pid=1527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:24:50.935000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jul 12 00:24:50.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:50.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:50.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:50.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:50.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:50.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:50.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:50.991650 systemd[1]: Finished systemd-network-generator.service.
Jul 12 00:24:50.996676 systemd[1]: Finished systemd-remount-fs.service.
Jul 12 00:24:50.999518 systemd[1]: Reached target network-pre.target.
Jul 12 00:24:50.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:51.003757 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Jul 12 00:24:51.008652 systemd[1]: Mounting sys-kernel-config.mount...
Jul 12 00:24:51.010440 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 12 00:24:51.021461 systemd[1]: Starting systemd-hwdb-update.service...
Jul 12 00:24:51.028699 systemd[1]: Starting systemd-journal-flush.service...
Jul 12 00:24:51.034935 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 00:24:51.038825 systemd[1]: Starting systemd-random-seed.service...
Jul 12 00:24:51.042858 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 12 00:24:51.048647 systemd[1]: Starting systemd-sysctl.service...
Jul 12 00:24:51.059208 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Jul 12 00:24:51.062605 systemd[1]: Mounted sys-kernel-config.mount.
Jul 12 00:24:51.081091 systemd-journald[1527]: Time spent on flushing to /var/log/journal/ec26403b234c549e9df4128d71eab779 is 63.785ms for 1072 entries.
Jul 12 00:24:51.081091 systemd-journald[1527]: System Journal (/var/log/journal/ec26403b234c549e9df4128d71eab779) is 8.0M, max 195.6M, 187.6M free.
Jul 12 00:24:51.165476 systemd-journald[1527]: Received client request to flush runtime journal.
Jul 12 00:24:51.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:51.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:51.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:51.088573 systemd[1]: Finished systemd-random-seed.service.
Jul 12 00:24:51.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:51.092839 systemd[1]: Reached target first-boot-complete.target.
Jul 12 00:24:51.117723 systemd[1]: Finished systemd-sysctl.service.
Jul 12 00:24:51.138540 systemd[1]: Finished flatcar-tmpfiles.service.
Jul 12 00:24:51.142996 systemd[1]: Starting systemd-sysusers.service...
Jul 12 00:24:51.167201 systemd[1]: Finished systemd-journal-flush.service.
Jul 12 00:24:51.213888 systemd[1]: Finished systemd-udev-trigger.service.
Jul 12 00:24:51.218414 systemd[1]: Starting systemd-udev-settle.service...
Jul 12 00:24:51.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:51.247334 systemd[1]: Finished systemd-sysusers.service.
Jul 12 00:24:51.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:51.254332 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 12 00:24:51.257825 udevadm[1576]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 12 00:24:51.332907 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 12 00:24:51.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:51.954743 systemd[1]: Finished systemd-hwdb-update.service.
Jul 12 00:24:51.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:51.959211 systemd[1]: Starting systemd-udevd.service...
Jul 12 00:24:52.000633 systemd-udevd[1582]: Using default interface naming scheme 'v252'.
Jul 12 00:24:52.048358 systemd[1]: Started systemd-udevd.service.
Jul 12 00:24:52.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:52.053342 systemd[1]: Starting systemd-networkd.service...
Jul 12 00:24:52.065110 systemd[1]: Starting systemd-userdbd.service...
Jul 12 00:24:52.142756 systemd[1]: Found device dev-ttyS0.device.
Jul 12 00:24:52.173515 systemd[1]: Started systemd-userdbd.service.
Jul 12 00:24:52.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:52.194134 (udev-worker)[1594]: Network interface NamePolicy= disabled on kernel command line.
Jul 12 00:24:52.348703 systemd-networkd[1586]: lo: Link UP
Jul 12 00:24:52.349409 systemd-networkd[1586]: lo: Gained carrier
Jul 12 00:24:52.350683 systemd-networkd[1586]: Enumeration completed
Jul 12 00:24:52.351021 systemd-networkd[1586]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 12 00:24:52.351022 systemd[1]: Started systemd-networkd.service.
Jul 12 00:24:52.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:52.355464 systemd[1]: Starting systemd-networkd-wait-online.service...
Jul 12 00:24:52.368042 systemd-networkd[1586]: eth0: Link UP
Jul 12 00:24:52.368244 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Jul 12 00:24:52.368645 systemd-networkd[1586]: eth0: Gained carrier
Jul 12 00:24:52.377503 systemd-networkd[1586]: eth0: DHCPv4 address 172.31.29.120/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 12 00:24:52.549633 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Jul 12 00:24:52.563167 systemd[1]: Finished systemd-udev-settle.service.
Jul 12 00:24:52.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:52.567912 systemd[1]: Starting lvm2-activation-early.service...
Jul 12 00:24:52.599495 lvm[1702]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 12 00:24:52.633984 systemd[1]: Finished lvm2-activation-early.service.
Jul 12 00:24:52.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:52.636299 systemd[1]: Reached target cryptsetup.target.
Jul 12 00:24:52.640403 systemd[1]: Starting lvm2-activation.service...
Jul 12 00:24:52.651058 lvm[1704]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 12 00:24:52.687042 systemd[1]: Finished lvm2-activation.service.
Jul 12 00:24:52.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:52.689211 systemd[1]: Reached target local-fs-pre.target.
Jul 12 00:24:52.691113 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 12 00:24:52.691166 systemd[1]: Reached target local-fs.target.
Jul 12 00:24:52.692966 systemd[1]: Reached target machines.target.
Jul 12 00:24:52.701153 systemd[1]: Starting ldconfig.service...
Jul 12 00:24:52.704103 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 12 00:24:52.705044 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:24:52.708862 systemd[1]: Starting systemd-boot-update.service...
Jul 12 00:24:52.713834 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Jul 12 00:24:52.719445 systemd[1]: Starting systemd-machine-id-commit.service...
Jul 12 00:24:52.728857 systemd[1]: Starting systemd-sysext.service...
Jul 12 00:24:52.749856 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1707 (bootctl)
Jul 12 00:24:52.753312 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Jul 12 00:24:52.763931 systemd[1]: Unmounting usr-share-oem.mount...
Jul 12 00:24:52.771623 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Jul 12 00:24:52.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:52.789421 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Jul 12 00:24:52.789974 systemd[1]: Unmounted usr-share-oem.mount.
Jul 12 00:24:52.816251 kernel: loop0: detected capacity change from 0 to 203944
Jul 12 00:24:52.893754 systemd-fsck[1719]: fsck.fat 4.2 (2021-01-31)
Jul 12 00:24:52.893754 systemd-fsck[1719]: /dev/nvme0n1p1: 236 files, 117310/258078 clusters
Jul 12 00:24:52.900391 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Jul 12 00:24:52.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:52.908212 systemd[1]: Mounting boot.mount...
Jul 12 00:24:52.932751 systemd[1]: Mounted boot.mount.
Jul 12 00:24:52.941795 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 12 00:24:52.944549 systemd[1]: Finished systemd-machine-id-commit.service.
Jul 12 00:24:52.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:52.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:52.968056 systemd[1]: Finished systemd-boot-update.service.
Jul 12 00:24:53.036270 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 12 00:24:53.062373 kernel: loop1: detected capacity change from 0 to 203944
Jul 12 00:24:53.084028 (sd-sysext)[1741]: Using extensions 'kubernetes'.
Jul 12 00:24:53.085768 (sd-sysext)[1741]: Merged extensions into '/usr'.
Jul 12 00:24:53.130096 systemd[1]: Mounting usr-share-oem.mount...
Jul 12 00:24:53.132172 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 12 00:24:53.134866 systemd[1]: Starting modprobe@dm_mod.service...
Jul 12 00:24:53.139122 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 12 00:24:53.145942 systemd[1]: Starting modprobe@loop.service...
Jul 12 00:24:53.147953 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 12 00:24:53.148288 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:24:53.150281 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:24:53.150673 systemd[1]: Finished modprobe@dm_mod.service.
Jul 12 00:24:53.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:53.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:53.168494 systemd[1]: Mounted usr-share-oem.mount.
Jul 12 00:24:53.171439 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:24:53.171807 systemd[1]: Finished modprobe@loop.service.
Jul 12 00:24:53.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:53.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:53.179375 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 12 00:24:53.181706 systemd[1]: Finished systemd-sysext.service.
Jul 12 00:24:53.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:53.184357 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:24:53.185238 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 12 00:24:53.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:53.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:53.196064 systemd[1]: Starting ensure-sysext.service...
Jul 12 00:24:53.199521 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 00:24:53.208449 systemd[1]: Starting systemd-tmpfiles-setup.service...
Jul 12 00:24:53.222339 systemd[1]: Reloading.
Jul 12 00:24:53.238179 systemd-tmpfiles[1755]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Jul 12 00:24:53.241887 systemd-tmpfiles[1755]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 12 00:24:53.247039 systemd-tmpfiles[1755]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 12 00:24:53.423570 /usr/lib/systemd/system-generators/torcx-generator[1777]: time="2025-07-12T00:24:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Jul 12 00:24:53.423625 /usr/lib/systemd/system-generators/torcx-generator[1777]: time="2025-07-12T00:24:53Z" level=info msg="torcx already run"
Jul 12 00:24:53.505492 ldconfig[1706]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 12 00:24:53.658627 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 12 00:24:53.658868 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 12 00:24:53.701944 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 00:24:53.845359 systemd[1]: Finished ldconfig.service.
Jul 12 00:24:53.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:53.850274 systemd[1]: Finished systemd-tmpfiles-setup.service.
Jul 12 00:24:53.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:53.860575 systemd[1]: Starting audit-rules.service...
Jul 12 00:24:53.865404 systemd[1]: Starting clean-ca-certificates.service...
Jul 12 00:24:53.870547 systemd[1]: Starting systemd-journal-catalog-update.service...
Jul 12 00:24:53.878893 systemd[1]: Starting systemd-resolved.service...
Jul 12 00:24:53.886574 systemd[1]: Starting systemd-timesyncd.service...
Jul 12 00:24:53.895521 systemd[1]: Starting systemd-update-utmp.service...
Jul 12 00:24:53.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:53.903663 systemd[1]: Finished clean-ca-certificates.service.
Jul 12 00:24:53.919329 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 12 00:24:53.922995 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 12 00:24:53.927478 systemd[1]: Starting modprobe@dm_mod.service...
Jul 12 00:24:53.935042 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 12 00:24:53.938000 audit[1849]: SYSTEM_BOOT pid=1849 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:53.942584 systemd[1]: Starting modprobe@loop.service...
Jul 12 00:24:53.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:53.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:53.944715 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 12 00:24:53.945051 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:24:53.945508 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 12 00:24:53.947599 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:24:53.948038 systemd[1]: Finished modprobe@dm_mod.service.
Jul 12 00:24:53.964763 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 12 00:24:53.967648 systemd[1]: Starting modprobe@dm_mod.service...
Jul 12 00:24:53.969542 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 12 00:24:53.969881 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:24:53.970189 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 12 00:24:53.972018 systemd[1]: Finished systemd-update-utmp.service.
Jul 12 00:24:53.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:53.986098 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 12 00:24:53.991909 systemd[1]: Starting modprobe@drm.service...
Jul 12 00:24:53.994465 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 12 00:24:53.994820 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:24:53.995180 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 12 00:24:53.999506 systemd[1]: Finished systemd-journal-catalog-update.service.
Jul 12 00:24:54.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:54.004772 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:24:54.005160 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 12 00:24:54.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:54.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:54.009654 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:24:54.010046 systemd[1]: Finished modprobe@loop.service.
Jul 12 00:24:54.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:54.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:54.019532 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 00:24:54.024299 systemd[1]: Starting systemd-update-done.service...
Jul 12 00:24:54.028095 systemd[1]: Finished ensure-sysext.service.
Jul 12 00:24:54.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:54.037684 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 12 00:24:54.038102 systemd[1]: Finished modprobe@drm.service.
Jul 12 00:24:54.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:54.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:54.045123 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:24:54.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:54.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:54.045594 systemd[1]: Finished modprobe@dm_mod.service.
Jul 12 00:24:54.047853 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 12 00:24:54.069868 systemd[1]: Finished systemd-update-done.service.
Jul 12 00:24:54.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:24:54.132000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jul 12 00:24:54.132000 audit[1877]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc2664570 a2=420 a3=0 items=0 ppid=1839 pid=1877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:24:54.132000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jul 12 00:24:54.134211 augenrules[1877]: No rules
Jul 12 00:24:54.135904 systemd[1]: Finished audit-rules.service.
Jul 12 00:24:54.184839 systemd[1]: Started systemd-timesyncd.service.
Jul 12 00:24:54.186937 systemd[1]: Reached target time-set.target.
Jul 12 00:24:54.193495 systemd-resolved[1842]: Positive Trust Anchors:
Jul 12 00:24:54.194003 systemd-resolved[1842]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:24:54.194152 systemd-resolved[1842]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 12 00:24:54.197413 systemd-networkd[1586]: eth0: Gained IPv6LL
Jul 12 00:24:54.200645 systemd[1]: Finished systemd-networkd-wait-online.service.
Jul 12 00:24:54.233744 systemd-resolved[1842]: Defaulting to hostname 'linux'.
Jul 12 00:24:54.237035 systemd[1]: Started systemd-resolved.service.
Jul 12 00:24:54.239145 systemd[1]: Reached target network.target.
Jul 12 00:24:54.240986 systemd[1]: Reached target network-online.target.
Jul 12 00:24:54.242955 systemd[1]: Reached target nss-lookup.target.
Jul 12 00:24:54.244910 systemd[1]: Reached target sysinit.target.
Jul 12 00:24:54.246886 systemd[1]: Started motdgen.path.
Jul 12 00:24:54.248564 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Jul 12 00:24:54.251265 systemd[1]: Started logrotate.timer.
Jul 12 00:24:54.253059 systemd[1]: Started mdadm.timer.
Jul 12 00:24:54.254659 systemd[1]: Started systemd-tmpfiles-clean.timer.
Jul 12 00:24:54.256612 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 12 00:24:54.256678 systemd[1]: Reached target paths.target.
Jul 12 00:24:54.258403 systemd[1]: Reached target timers.target.
Jul 12 00:24:54.260617 systemd[1]: Listening on dbus.socket.
Jul 12 00:24:54.264692 systemd[1]: Starting docker.socket...
Jul 12 00:24:54.269123 systemd[1]: Listening on sshd.socket.
Jul 12 00:24:54.270986 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:24:54.271594 systemd[1]: Listening on docker.socket.
Jul 12 00:24:54.273366 systemd[1]: Reached target sockets.target.
Jul 12 00:24:54.275243 systemd[1]: Reached target basic.target.
Jul 12 00:24:54.277267 systemd[1]: System is tainted: cgroupsv1
Jul 12 00:24:54.277348 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 12 00:24:54.277402 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 12 00:24:54.280441 systemd[1]: Started amazon-ssm-agent.service.
Jul 12 00:24:54.288115 systemd[1]: Starting containerd.service...
Jul 12 00:24:54.294057 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Jul 12 00:24:54.301498 systemd[1]: Starting dbus.service...
Jul 12 00:24:54.308120 systemd[1]: Starting enable-oem-cloudinit.service...
Jul 12 00:24:54.326251 systemd[1]: Starting extend-filesystems.service...
Jul 12 00:24:54.333428 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Jul 12 00:24:54.336528 systemd[1]: Starting kubelet.service...
Jul 12 00:24:54.351806 systemd[1]: Starting motdgen.service...
Jul 12 00:24:54.358300 systemd[1]: Started nvidia.service.
Jul 12 00:24:54.365867 systemd[1]: Starting prepare-helm.service...
Jul 12 00:24:54.374241 systemd[1]: Starting ssh-key-proc-cmdline.service...
Jul 12 00:24:54.384140 systemd[1]: Starting sshd-keygen.service...
Jul 12 00:24:54.476898 jq[1892]: false
Jul 12 00:24:54.394633 systemd[1]: Starting systemd-logind.service...
Jul 12 00:24:54.396456 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:24:54.396627 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 12 00:24:54.399562 systemd[1]: Starting update-engine.service...
Jul 12 00:24:54.412545 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Jul 12 00:24:54.417520 systemd-timesyncd[1844]: Contacted time server 142.202.190.19:123 (0.flatcar.pool.ntp.org).
Jul 12 00:24:54.417636 systemd-timesyncd[1844]: Initial clock synchronization to Sat 2025-07-12 00:24:54.349801 UTC.
Jul 12 00:24:54.477176 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 12 00:24:54.477740 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Jul 12 00:24:54.504887 jq[1907]: true
Jul 12 00:24:54.500606 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 12 00:24:54.501128 systemd[1]: Finished ssh-key-proc-cmdline.service.
Jul 12 00:24:54.541450 tar[1911]: linux-arm64/helm
Jul 12 00:24:54.581507 jq[1926]: true
Jul 12 00:24:54.603410 systemd[1]: motdgen.service: Deactivated successfully.
Jul 12 00:24:54.603977 systemd[1]: Finished motdgen.service.
Jul 12 00:24:54.671190 extend-filesystems[1893]: Found loop1
Jul 12 00:24:54.676883 extend-filesystems[1893]: Found nvme0n1
Jul 12 00:24:54.678716 extend-filesystems[1893]: Found nvme0n1p1
Jul 12 00:24:54.686504 extend-filesystems[1893]: Found nvme0n1p2
Jul 12 00:24:54.689024 extend-filesystems[1893]: Found nvme0n1p3
Jul 12 00:24:54.690904 extend-filesystems[1893]: Found usr
Jul 12 00:24:54.692620 extend-filesystems[1893]: Found nvme0n1p4
Jul 12 00:24:54.699015 extend-filesystems[1893]: Found nvme0n1p6
Jul 12 00:24:54.701206 extend-filesystems[1893]: Found nvme0n1p7
Jul 12 00:24:54.703673 extend-filesystems[1893]: Found nvme0n1p9
Jul 12 00:24:54.705498 extend-filesystems[1893]: Checking size of /dev/nvme0n1p9
Jul 12 00:24:54.737912 dbus-daemon[1891]: [system] SELinux support is enabled
Jul 12 00:24:54.738821 systemd[1]: Started dbus.service.
Jul 12 00:24:54.743974 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 12 00:24:54.744036 systemd[1]: Reached target system-config.target.
Jul 12 00:24:54.747450 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 12 00:24:54.747503 systemd[1]: Reached target user-config.target.
Jul 12 00:24:54.754569 dbus-daemon[1891]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1586 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jul 12 00:24:54.765770 dbus-daemon[1891]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 12 00:24:54.790534 systemd[1]: Starting systemd-hostnamed.service...
Jul 12 00:24:54.839565 extend-filesystems[1893]: Resized partition /dev/nvme0n1p9
Jul 12 00:24:54.865621 extend-filesystems[1964]: resize2fs 1.46.5 (30-Dec-2021)
Jul 12 00:24:54.874016 bash[1958]: Updated "/home/core/.ssh/authorized_keys"
Jul 12 00:24:54.875514 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Jul 12 00:24:54.938269 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Jul 12 00:24:54.949212 update_engine[1906]: I0712 00:24:54.939826  1906 main.cc:92] Flatcar Update Engine starting
Jul 12 00:24:54.969453 systemd[1]: Started update-engine.service.
Jul 12 00:24:54.974620 systemd[1]: Started locksmithd.service.
Jul 12 00:24:54.979164 update_engine[1906]: I0712 00:24:54.979118  1906 update_check_scheduler.cc:74] Next update check in 3m38s
Jul 12 00:24:55.015258 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Jul 12 00:24:55.031320 amazon-ssm-agent[1887]: 2025/07/12 00:24:55 Failed to load instance info from vault. RegistrationKey does not exist.
Jul 12 00:24:55.036956 extend-filesystems[1964]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jul 12 00:24:55.036956 extend-filesystems[1964]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 12 00:24:55.036956 extend-filesystems[1964]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Jul 12 00:24:55.053431 extend-filesystems[1893]: Resized filesystem in /dev/nvme0n1p9
Jul 12 00:24:55.038388 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 12 00:24:55.038915 systemd[1]: Finished extend-filesystems.service.
Jul 12 00:24:55.054068 systemd[1]: nvidia.service: Deactivated successfully.
Jul 12 00:24:55.100904 amazon-ssm-agent[1887]: Initializing new seelog logger
Jul 12 00:24:55.101127 amazon-ssm-agent[1887]: New Seelog Logger Creation Complete
Jul 12 00:24:55.101366 amazon-ssm-agent[1887]: 2025/07/12 00:24:55 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 12 00:24:55.101366 amazon-ssm-agent[1887]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 12 00:24:55.101841 amazon-ssm-agent[1887]: 2025/07/12 00:24:55 processing appconfig overrides
Jul 12 00:24:55.194064 env[1913]: time="2025-07-12T00:24:55.192844218Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Jul 12 00:24:55.220756 systemd-logind[1905]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 12 00:24:55.223914 systemd-logind[1905]: Watching system buttons on /dev/input/event1 (Sleep Button)
Jul 12 00:24:55.229397 systemd-logind[1905]: New seat seat0.
Jul 12 00:24:55.239391 systemd[1]: Started systemd-logind.service.
Jul 12 00:24:55.374482 env[1913]: time="2025-07-12T00:24:55.374354289Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 12 00:24:55.376559 dbus-daemon[1891]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jul 12 00:24:55.376854 systemd[1]: Started systemd-hostnamed.service.
Jul 12 00:24:55.380095 env[1913]: time="2025-07-12T00:24:55.380047367Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:24:55.381318 dbus-daemon[1891]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1960 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jul 12 00:24:55.386408 systemd[1]: Starting polkit.service...
Jul 12 00:24:55.397657 env[1913]: time="2025-07-12T00:24:55.397559846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.186-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:24:55.397657 env[1913]: time="2025-07-12T00:24:55.397635629Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:24:55.398231 env[1913]: time="2025-07-12T00:24:55.398147535Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:24:55.398231 env[1913]: time="2025-07-12T00:24:55.398202530Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 12 00:24:55.398387 env[1913]: time="2025-07-12T00:24:55.398253969Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 12 00:24:55.398387 env[1913]: time="2025-07-12T00:24:55.398280729Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 12 00:24:55.398500 env[1913]: time="2025-07-12T00:24:55.398457960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:24:55.399418 env[1913]: time="2025-07-12T00:24:55.399367806Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:24:55.404402 env[1913]: time="2025-07-12T00:24:55.404313575Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:24:55.404402 env[1913]: time="2025-07-12T00:24:55.404390274Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 12 00:24:55.404618 env[1913]: time="2025-07-12T00:24:55.404552536Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 12 00:24:55.404618 env[1913]: time="2025-07-12T00:24:55.404585674Z" level=info msg="metadata content store policy set" policy=shared
Jul 12 00:24:55.415777 env[1913]: time="2025-07-12T00:24:55.415707715Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 12 00:24:55.415939 env[1913]: time="2025-07-12T00:24:55.415787401Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 12 00:24:55.415939 env[1913]: time="2025-07-12T00:24:55.415821812Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 12 00:24:55.415939 env[1913]: time="2025-07-12T00:24:55.415909517Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 12 00:24:55.416252 env[1913]: time="2025-07-12T00:24:55.416035620Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 12 00:24:55.416252 env[1913]: time="2025-07-12T00:24:55.416077908Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 12 00:24:55.416252 env[1913]: time="2025-07-12T00:24:55.416109760Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 12 00:24:55.416711 env[1913]: time="2025-07-12T00:24:55.416650545Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 12 00:24:55.416800 env[1913]: time="2025-07-12T00:24:55.416712251Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Jul 12 00:24:55.416800 env[1913]: time="2025-07-12T00:24:55.416757692Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 12 00:24:55.416800 env[1913]: time="2025-07-12T00:24:55.416789319Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 12 00:24:55.416954 env[1913]: time="2025-07-12T00:24:55.416819316Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 12 00:24:55.417098 env[1913]: time="2025-07-12T00:24:55.417047224Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 12 00:24:55.417345 env[1913]: time="2025-07-12T00:24:55.417293622Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 12 00:24:55.420916 env[1913]: time="2025-07-12T00:24:55.420843744Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 12 00:24:55.421074 env[1913]: time="2025-07-12T00:24:55.420936446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 12 00:24:55.421074 env[1913]: time="2025-07-12T00:24:55.420972999Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 12 00:24:55.421249 env[1913]: time="2025-07-12T00:24:55.421186081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 12 00:24:55.421312 env[1913]: time="2025-07-12T00:24:55.421242862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 12 00:24:55.421312 env[1913]: time="2025-07-12T00:24:55.421277309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 12 00:24:55.421439 env[1913]: time="2025-07-12T00:24:55.421319942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 12 00:24:55.421439 env[1913]: time="2025-07-12T00:24:55.421350129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 12 00:24:55.421439 env[1913]: time="2025-07-12T00:24:55.421380232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 12 00:24:55.421439 env[1913]: time="2025-07-12T00:24:55.421410205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 12 00:24:55.421639 env[1913]: time="2025-07-12T00:24:55.421439155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 12 00:24:55.421639 env[1913]: time="2025-07-12T00:24:55.421473970Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 12 00:24:55.421803 env[1913]: time="2025-07-12T00:24:55.421754934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 12 00:24:55.421872 env[1913]: time="2025-07-12T00:24:55.421805694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 12 00:24:55.421872 env[1913]: time="2025-07-12T00:24:55.421842319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 12 00:24:55.421972 env[1913]: time="2025-07-12T00:24:55.421871423Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 12 00:24:55.421972 env[1913]: time="2025-07-12T00:24:55.421903823Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Jul 12 00:24:55.421972 env[1913]: time="2025-07-12T00:24:55.421930214Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 12 00:24:55.422129 env[1913]: time="2025-07-12T00:24:55.421966648Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Jul 12 00:24:55.422129 env[1913]: time="2025-07-12T00:24:55.422030175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 12 00:24:55.422964 env[1913]: time="2025-07-12T00:24:55.422830994Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 12 00:24:55.425062 env[1913]: time="2025-07-12T00:24:55.422960416Z" level=info msg="Connect containerd service"
Jul 12 00:24:55.434573 polkitd[2001]: Started polkitd version 121
Jul 12 00:24:55.437630 env[1913]: time="2025-07-12T00:24:55.437537044Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 12 00:24:55.445638 env[1913]: time="2025-07-12T00:24:55.445452122Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 12 00:24:55.445860 env[1913]: time="2025-07-12T00:24:55.445793258Z" level=info msg="Start subscribing containerd event"
Jul 12 00:24:55.445974 env[1913]: time="2025-07-12T00:24:55.445892160Z" level=info msg="Start recovering state"
Jul 12 00:24:55.447789 env[1913]: time="2025-07-12T00:24:55.447590926Z" level=info msg="Start event monitor"
Jul 12 00:24:55.447789 env[1913]: time="2025-07-12T00:24:55.447694290Z" level=info msg="Start snapshots syncer"
Jul 12 00:24:55.447789 env[1913]: time="2025-07-12T00:24:55.447722645Z" level=info msg="Start cni network conf syncer for default"
Jul 12 00:24:55.451095 env[1913]: time="2025-07-12T00:24:55.451019229Z" level=info msg="Start streaming server"
Jul 12 00:24:55.457443 env[1913]: time="2025-07-12T00:24:55.457347103Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 12 00:24:55.457611 env[1913]: time="2025-07-12T00:24:55.457570370Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 12 00:24:55.457864 systemd[1]: Started containerd.service.
Jul 12 00:24:55.465888 env[1913]: time="2025-07-12T00:24:55.465760677Z" level=info msg="containerd successfully booted in 0.347190s"
Jul 12 00:24:55.482270 polkitd[2001]: Loading rules from directory /etc/polkit-1/rules.d
Jul 12 00:24:55.482395 polkitd[2001]: Loading rules from directory /usr/share/polkit-1/rules.d
Jul 12 00:24:55.495279 polkitd[2001]: Finished loading, compiling and executing 2 rules
Jul 12 00:24:55.496025 dbus-daemon[1891]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jul 12 00:24:55.496288 systemd[1]: Started polkit.service.
Jul 12 00:24:55.501410 polkitd[2001]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jul 12 00:24:55.545174 systemd-hostnamed[1960]: Hostname set to (transient)
Jul 12 00:24:55.545292 systemd-resolved[1842]: System hostname changed to 'ip-172-31-29-120'.
Jul 12 00:24:55.585172 coreos-metadata[1890]: Jul 12 00:24:55.584 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jul 12 00:24:55.593762 coreos-metadata[1890]: Jul 12 00:24:55.593 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1
Jul 12 00:24:55.596139 coreos-metadata[1890]: Jul 12 00:24:55.595 INFO Fetch successful
Jul 12 00:24:55.596470 coreos-metadata[1890]: Jul 12 00:24:55.596 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1
Jul 12 00:24:55.607589 coreos-metadata[1890]: Jul 12 00:24:55.607 INFO Fetch successful
Jul 12 00:24:55.611424 unknown[1890]: wrote ssh authorized keys file for user: core
Jul 12 00:24:55.649103 update-ssh-keys[2028]: Updated "/home/core/.ssh/authorized_keys"
Jul 12 00:24:55.650627 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Jul 12 00:24:55.877259 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO Create new startup processor
Jul 12 00:24:55.882409 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [LongRunningPluginsManager] registered plugins: {}
Jul 12 00:24:55.882409 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO Initializing bookkeeping folders
Jul 12 00:24:55.882585 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO removing the completed state files
Jul 12 00:24:55.882585 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO Initializing bookkeeping folders for long running plugins
Jul 12 00:24:55.882585 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO Initializing replies folder for MDS reply requests that couldn't reach the service
Jul 12 00:24:55.882585 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO Initializing healthcheck folders for long running plugins
Jul 12 00:24:55.882585 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO Initializing locations for inventory plugin
Jul 12 00:24:55.882585 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO Initializing default location for custom inventory
Jul 12 00:24:55.882585 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO Initializing default location for file inventory
Jul 12 00:24:55.882928 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO Initializing default location for role inventory
Jul 12 00:24:55.882928 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO Init the cloudwatchlogs publisher
Jul 12 00:24:55.882928 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [instanceID=i-0fee3ccd5e13b4105] Successfully loaded platform independent plugin aws:softwareInventory
Jul 12 00:24:55.882928 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [instanceID=i-0fee3ccd5e13b4105] Successfully loaded platform independent plugin aws:runDockerAction
Jul 12 00:24:55.882928 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [instanceID=i-0fee3ccd5e13b4105] Successfully loaded platform independent plugin aws:downloadContent
Jul 12 00:24:55.882928 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [instanceID=i-0fee3ccd5e13b4105] Successfully loaded platform independent plugin aws:runDocument
Jul 12 00:24:55.882928 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [instanceID=i-0fee3ccd5e13b4105] Successfully loaded platform independent plugin aws:runPowerShellScript
Jul 12 00:24:55.882928 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [instanceID=i-0fee3ccd5e13b4105] Successfully loaded platform independent plugin aws:updateSsmAgent
Jul 12 00:24:55.882928 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [instanceID=i-0fee3ccd5e13b4105] Successfully loaded platform independent plugin aws:configureDocker
Jul 12 00:24:55.882928 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [instanceID=i-0fee3ccd5e13b4105] Successfully loaded platform independent plugin aws:refreshAssociation
Jul 12 00:24:55.882928 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [instanceID=i-0fee3ccd5e13b4105] Successfully loaded platform independent plugin aws:configurePackage
Jul 12 00:24:55.882928 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [instanceID=i-0fee3ccd5e13b4105] Successfully loaded platform dependent plugin aws:runShellScript
Jul 12 00:24:55.882928 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0
Jul 12 00:24:55.882928 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO OS: linux, Arch: arm64
Jul 12 00:24:55.894042 amazon-ssm-agent[1887]: datastore file /var/lib/amazon/ssm/i-0fee3ccd5e13b4105/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute
Jul 12 00:24:55.982159 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [MessagingDeliveryService] Starting document processing engine...
Jul 12 00:24:56.077042 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [MessagingDeliveryService] [EngineProcessor] Starting
Jul 12 00:24:56.171368 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing
Jul 12 00:24:56.265947 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [MessagingDeliveryService] Starting message polling
Jul 12 00:24:56.360619 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [MessagingDeliveryService] Starting send replies to MDS
Jul 12 00:24:56.455536 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [instanceID=i-0fee3ccd5e13b4105] Starting association polling
Jul 12 00:24:56.524760 tar[1911]: linux-arm64/LICENSE
Jul 12 00:24:56.525567 tar[1911]: linux-arm64/README.md
Jul 12 00:24:56.543857 systemd[1]: Finished prepare-helm.service.
Jul 12 00:24:56.550819 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Jul 12 00:24:56.612296 locksmithd[1977]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 12 00:24:56.646103 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [MessagingDeliveryService] [Association] Launching response handler Jul 12 00:24:56.741614 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Jul 12 00:24:56.837382 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Jul 12 00:24:56.933235 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Jul 12 00:24:57.029334 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [HealthCheck] HealthCheck reporting agent health. Jul 12 00:24:57.125774 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [MessageGatewayService] Starting session document processing engine... Jul 12 00:24:57.222218 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [MessageGatewayService] [EngineProcessor] Starting Jul 12 00:24:57.318890 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Jul 12 00:24:57.415827 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0fee3ccd5e13b4105, requestId: bee79b11-9738-43f9-bbab-65958fd92cd0 Jul 12 00:24:57.512844 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [OfflineService] Starting document processing engine... Jul 12 00:24:57.542376 systemd[1]: Started kubelet.service. 
Jul 12 00:24:57.610126 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [OfflineService] [EngineProcessor] Starting Jul 12 00:24:57.707637 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [OfflineService] [EngineProcessor] Initial processing Jul 12 00:24:57.805239 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [OfflineService] Starting message polling Jul 12 00:24:57.903144 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [OfflineService] Starting send replies to MDS Jul 12 00:24:58.001275 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [LongRunningPluginsManager] starting long running plugin manager Jul 12 00:24:58.099438 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Jul 12 00:24:58.197892 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [MessageGatewayService] listening reply. Jul 12 00:24:58.296596 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Jul 12 00:24:58.395372 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [StartupProcessor] Executing startup processor tasks Jul 12 00:24:58.494367 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Jul 12 00:24:58.593716 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Jul 12 00:24:58.693076 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.7 Jul 12 00:24:58.792746 amazon-ssm-agent[1887]: 2025-07-12 00:24:55 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0fee3ccd5e13b4105?role=subscribe&stream=input Jul 12 00:24:58.892618 amazon-ssm-agent[1887]: 2025-07-12 00:24:56 INFO [MessageGatewayService] Successfully opened websocket 
connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0fee3ccd5e13b4105?role=subscribe&stream=input Jul 12 00:24:58.992572 amazon-ssm-agent[1887]: 2025-07-12 00:24:56 INFO [MessageGatewayService] Starting receiving message from control channel Jul 12 00:24:59.005258 kubelet[2122]: E0712 00:24:59.005176 2122 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:24:59.008771 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:24:59.009177 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:24:59.092798 amazon-ssm-agent[1887]: 2025-07-12 00:24:56 INFO [MessageGatewayService] [EngineProcessor] Initial processing Jul 12 00:25:00.348284 sshd_keygen[1928]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 12 00:25:00.386463 systemd[1]: Finished sshd-keygen.service. Jul 12 00:25:00.391439 systemd[1]: Starting issuegen.service... Jul 12 00:25:00.405192 systemd[1]: issuegen.service: Deactivated successfully. Jul 12 00:25:00.405766 systemd[1]: Finished issuegen.service. Jul 12 00:25:00.411064 systemd[1]: Starting systemd-user-sessions.service... Jul 12 00:25:00.428183 systemd[1]: Finished systemd-user-sessions.service. Jul 12 00:25:00.434494 systemd[1]: Started getty@tty1.service. Jul 12 00:25:00.440011 systemd[1]: Started serial-getty@ttyS0.service. Jul 12 00:25:00.443248 systemd[1]: Reached target getty.target. Jul 12 00:25:00.445661 systemd[1]: Reached target multi-user.target. Jul 12 00:25:00.451467 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 12 00:25:00.468935 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. 
Jul 12 00:25:00.469699 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 12 00:25:00.474733 systemd[1]: Startup finished in 9.213s (kernel) + 13.862s (userspace) = 23.076s. Jul 12 00:25:03.345002 systemd[1]: Created slice system-sshd.slice. Jul 12 00:25:03.349034 systemd[1]: Started sshd@0-172.31.29.120:22-147.75.109.163:51582.service. Jul 12 00:25:03.541929 sshd[2148]: Accepted publickey for core from 147.75.109.163 port 51582 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:25:03.547962 sshd[2148]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:25:03.569690 systemd[1]: Created slice user-500.slice. Jul 12 00:25:03.571791 systemd[1]: Starting user-runtime-dir@500.service... Jul 12 00:25:03.579070 systemd-logind[1905]: New session 1 of user core. Jul 12 00:25:03.591585 systemd[1]: Finished user-runtime-dir@500.service. Jul 12 00:25:03.594179 systemd[1]: Starting user@500.service... Jul 12 00:25:03.609716 (systemd)[2153]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:25:03.710400 amazon-ssm-agent[1887]: 2025-07-12 00:25:03 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Jul 12 00:25:03.809197 systemd[2153]: Queued start job for default target default.target. Jul 12 00:25:03.809635 systemd[2153]: Reached target paths.target. Jul 12 00:25:03.809673 systemd[2153]: Reached target sockets.target. Jul 12 00:25:03.809705 systemd[2153]: Reached target timers.target. Jul 12 00:25:03.809733 systemd[2153]: Reached target basic.target. Jul 12 00:25:03.809830 systemd[2153]: Reached target default.target. Jul 12 00:25:03.809892 systemd[2153]: Startup finished in 188ms. Jul 12 00:25:03.809938 systemd[1]: Started user@500.service. Jul 12 00:25:03.811951 systemd[1]: Started session-1.scope. Jul 12 00:25:03.956936 systemd[1]: Started sshd@1-172.31.29.120:22-147.75.109.163:51594.service. 
Jul 12 00:25:04.134082 sshd[2162]: Accepted publickey for core from 147.75.109.163 port 51594 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:25:04.136667 sshd[2162]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:25:04.144832 systemd-logind[1905]: New session 2 of user core. Jul 12 00:25:04.145775 systemd[1]: Started session-2.scope. Jul 12 00:25:04.277613 sshd[2162]: pam_unix(sshd:session): session closed for user core Jul 12 00:25:04.283182 systemd-logind[1905]: Session 2 logged out. Waiting for processes to exit. Jul 12 00:25:04.284352 systemd[1]: sshd@1-172.31.29.120:22-147.75.109.163:51594.service: Deactivated successfully. Jul 12 00:25:04.285861 systemd[1]: session-2.scope: Deactivated successfully. Jul 12 00:25:04.286916 systemd-logind[1905]: Removed session 2. Jul 12 00:25:04.302191 systemd[1]: Started sshd@2-172.31.29.120:22-147.75.109.163:51608.service. Jul 12 00:25:04.471971 sshd[2169]: Accepted publickey for core from 147.75.109.163 port 51608 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:25:04.474495 sshd[2169]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:25:04.482422 systemd-logind[1905]: New session 3 of user core. Jul 12 00:25:04.483381 systemd[1]: Started session-3.scope. Jul 12 00:25:04.605952 sshd[2169]: pam_unix(sshd:session): session closed for user core Jul 12 00:25:04.611111 systemd[1]: sshd@2-172.31.29.120:22-147.75.109.163:51608.service: Deactivated successfully. Jul 12 00:25:04.612812 systemd-logind[1905]: Session 3 logged out. Waiting for processes to exit. Jul 12 00:25:04.612960 systemd[1]: session-3.scope: Deactivated successfully. Jul 12 00:25:04.616144 systemd-logind[1905]: Removed session 3. Jul 12 00:25:04.631500 systemd[1]: Started sshd@3-172.31.29.120:22-147.75.109.163:51622.service. 
Jul 12 00:25:04.800659 sshd[2176]: Accepted publickey for core from 147.75.109.163 port 51622 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:25:04.803117 sshd[2176]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:25:04.811072 systemd-logind[1905]: New session 4 of user core. Jul 12 00:25:04.811926 systemd[1]: Started session-4.scope. Jul 12 00:25:04.942390 sshd[2176]: pam_unix(sshd:session): session closed for user core Jul 12 00:25:04.947832 systemd[1]: sshd@3-172.31.29.120:22-147.75.109.163:51622.service: Deactivated successfully. Jul 12 00:25:04.950443 systemd[1]: session-4.scope: Deactivated successfully. Jul 12 00:25:04.951581 systemd-logind[1905]: Session 4 logged out. Waiting for processes to exit. Jul 12 00:25:04.954607 systemd-logind[1905]: Removed session 4. Jul 12 00:25:04.968394 systemd[1]: Started sshd@4-172.31.29.120:22-147.75.109.163:51630.service. Jul 12 00:25:05.140400 sshd[2183]: Accepted publickey for core from 147.75.109.163 port 51630 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:25:05.143398 sshd[2183]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:25:05.151843 systemd[1]: Started session-5.scope. Jul 12 00:25:05.152319 systemd-logind[1905]: New session 5 of user core. Jul 12 00:25:05.280202 sudo[2187]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 12 00:25:05.280776 sudo[2187]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 12 00:25:05.296506 dbus-daemon[1891]: avc: received setenforce notice (enforcing=1) Jul 12 00:25:05.299938 sudo[2187]: pam_unix(sudo:session): session closed for user root Jul 12 00:25:05.325829 sshd[2183]: pam_unix(sshd:session): session closed for user core Jul 12 00:25:05.331511 systemd-logind[1905]: Session 5 logged out. Waiting for processes to exit. 
Jul 12 00:25:05.332130 systemd[1]: sshd@4-172.31.29.120:22-147.75.109.163:51630.service: Deactivated successfully. Jul 12 00:25:05.333703 systemd[1]: session-5.scope: Deactivated successfully. Jul 12 00:25:05.335016 systemd-logind[1905]: Removed session 5. Jul 12 00:25:05.352075 systemd[1]: Started sshd@5-172.31.29.120:22-147.75.109.163:51640.service. Jul 12 00:25:05.527746 sshd[2191]: Accepted publickey for core from 147.75.109.163 port 51640 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:25:05.530785 sshd[2191]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:25:05.539815 systemd[1]: Started session-6.scope. Jul 12 00:25:05.541302 systemd-logind[1905]: New session 6 of user core. Jul 12 00:25:05.650259 sudo[2196]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 12 00:25:05.650788 sudo[2196]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 12 00:25:05.655994 sudo[2196]: pam_unix(sudo:session): session closed for user root Jul 12 00:25:05.665366 sudo[2195]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 12 00:25:05.666432 sudo[2195]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 12 00:25:05.684023 systemd[1]: Stopping audit-rules.service... 
Jul 12 00:25:05.685000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jul 12 00:25:05.688311 kernel: kauditd_printk_skb: 58 callbacks suppressed Jul 12 00:25:05.688373 kernel: audit: type=1305 audit(1752279905.685:151): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jul 12 00:25:05.685000 audit[2199]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc6159c70 a2=420 a3=0 items=0 ppid=1 pid=2199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:05.694056 auditctl[2199]: No rules Jul 12 00:25:05.695093 systemd[1]: audit-rules.service: Deactivated successfully. Jul 12 00:25:05.695614 systemd[1]: Stopped audit-rules.service. Jul 12 00:25:05.700873 systemd[1]: Starting audit-rules.service... Jul 12 00:25:05.708959 kernel: audit: type=1300 audit(1752279905.685:151): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc6159c70 a2=420 a3=0 items=0 ppid=1 pid=2199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:05.709063 kernel: audit: type=1327 audit(1752279905.685:151): proctitle=2F7362696E2F617564697463746C002D44 Jul 12 00:25:05.709133 kernel: audit: type=1131 audit(1752279905.693:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:25:05.685000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jul 12 00:25:05.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:05.741816 augenrules[2217]: No rules Jul 12 00:25:05.743702 systemd[1]: Finished audit-rules.service. Jul 12 00:25:05.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:05.745585 sudo[2195]: pam_unix(sudo:session): session closed for user root Jul 12 00:25:05.743000 audit[2195]: USER_END pid=2195 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 12 00:25:05.762167 kernel: audit: type=1130 audit(1752279905.743:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:05.762323 kernel: audit: type=1106 audit(1752279905.743:154): pid=2195 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 12 00:25:05.743000 audit[2195]: CRED_DISP pid=2195 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jul 12 00:25:05.770734 kernel: audit: type=1104 audit(1752279905.743:155): pid=2195 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 12 00:25:05.775582 sshd[2191]: pam_unix(sshd:session): session closed for user core Jul 12 00:25:05.777000 audit[2191]: USER_END pid=2191 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:25:05.780901 systemd-logind[1905]: Session 6 logged out. Waiting for processes to exit. Jul 12 00:25:05.782427 systemd[1]: sshd@5-172.31.29.120:22-147.75.109.163:51640.service: Deactivated successfully. Jul 12 00:25:05.783746 systemd[1]: session-6.scope: Deactivated successfully. Jul 12 00:25:05.785502 systemd-logind[1905]: Removed session 6. 
Jul 12 00:25:05.777000 audit[2191]: CRED_DISP pid=2191 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:25:05.801023 kernel: audit: type=1106 audit(1752279905.777:156): pid=2191 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:25:05.801137 kernel: audit: type=1104 audit(1752279905.777:157): pid=2191 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:25:05.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.29.120:22-147.75.109.163:51640 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:05.810458 kernel: audit: type=1131 audit(1752279905.777:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.29.120:22-147.75.109.163:51640 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:05.808159 systemd[1]: Started sshd@6-172.31.29.120:22-147.75.109.163:51652.service. Jul 12 00:25:05.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.29.120:22-147.75.109.163:51652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:25:05.975000 audit[2224]: USER_ACCT pid=2224 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:25:05.977354 sshd[2224]: Accepted publickey for core from 147.75.109.163 port 51652 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:25:05.978000 audit[2224]: CRED_ACQ pid=2224 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:25:05.978000 audit[2224]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc90ebc00 a2=3 a3=1 items=0 ppid=1 pid=2224 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:05.978000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:25:05.980650 sshd[2224]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:25:05.988727 systemd-logind[1905]: New session 7 of user core. Jul 12 00:25:05.989610 systemd[1]: Started session-7.scope. 
Jul 12 00:25:05.998000 audit[2224]: USER_START pid=2224 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:25:06.001000 audit[2227]: CRED_ACQ pid=2227 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:25:06.094000 audit[2228]: USER_ACCT pid=2228 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 12 00:25:06.096968 sudo[2228]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 12 00:25:06.096000 audit[2228]: CRED_REFR pid=2228 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 12 00:25:06.098156 sudo[2228]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 12 00:25:06.100000 audit[2228]: USER_START pid=2228 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 12 00:25:06.179273 systemd[1]: Starting docker.service... 
Jul 12 00:25:06.295721 env[2238]: time="2025-07-12T00:25:06.295626683Z" level=info msg="Starting up" Jul 12 00:25:06.298691 env[2238]: time="2025-07-12T00:25:06.298644017Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 12 00:25:06.298916 env[2238]: time="2025-07-12T00:25:06.298882221Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 12 00:25:06.299051 env[2238]: time="2025-07-12T00:25:06.299018719Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 12 00:25:06.299157 env[2238]: time="2025-07-12T00:25:06.299131288Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 12 00:25:06.303715 env[2238]: time="2025-07-12T00:25:06.303644485Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 12 00:25:06.303715 env[2238]: time="2025-07-12T00:25:06.303689434Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 12 00:25:06.303954 env[2238]: time="2025-07-12T00:25:06.303728382Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 12 00:25:06.303954 env[2238]: time="2025-07-12T00:25:06.303753497Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 12 00:25:06.318330 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4219425286-merged.mount: Deactivated successfully. Jul 12 00:25:06.619300 env[2238]: time="2025-07-12T00:25:06.619213348Z" level=warning msg="Your kernel does not support cgroup blkio weight" Jul 12 00:25:06.619586 env[2238]: time="2025-07-12T00:25:06.619558851Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Jul 12 00:25:06.620007 env[2238]: time="2025-07-12T00:25:06.619979352Z" level=info msg="Loading containers: start." 
Jul 12 00:25:06.720000 audit[2269]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2269 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:06.720000 audit[2269]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffff5ee19c0 a2=0 a3=1 items=0 ppid=2238 pid=2269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:06.720000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jul 12 00:25:06.724000 audit[2271]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2271 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:06.724000 audit[2271]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc51e8b80 a2=0 a3=1 items=0 ppid=2238 pid=2271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:06.724000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jul 12 00:25:06.728000 audit[2273]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2273 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:06.728000 audit[2273]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff2da4b20 a2=0 a3=1 items=0 ppid=2238 pid=2273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:06.728000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 12 00:25:06.732000 
audit[2275]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2275 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:06.732000 audit[2275]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc8158c90 a2=0 a3=1 items=0 ppid=2238 pid=2275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:06.732000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 12 00:25:06.739000 audit[2277]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2277 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:06.739000 audit[2277]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffdc03dae0 a2=0 a3=1 items=0 ppid=2238 pid=2277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:06.739000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jul 12 00:25:06.772000 audit[2282]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=2282 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:06.772000 audit[2282]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff8206300 a2=0 a3=1 items=0 ppid=2238 pid=2282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:06.772000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jul 12 00:25:06.783000 audit[2284]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2284 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:06.783000 audit[2284]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe15862e0 a2=0 a3=1 items=0 ppid=2238 pid=2284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:06.783000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jul 12 00:25:06.787000 audit[2286]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2286 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:06.787000 audit[2286]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffeff6d8e0 a2=0 a3=1 items=0 ppid=2238 pid=2286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:06.787000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jul 12 00:25:06.791000 audit[2288]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=2288 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:06.791000 audit[2288]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffdb30b720 a2=0 a3=1 items=0 ppid=2238 pid=2288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:06.791000 audit: 
PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 12 00:25:06.803000 audit[2292]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=2292 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:06.803000 audit[2292]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=fffff69aa360 a2=0 a3=1 items=0 ppid=2238 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:06.803000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 12 00:25:06.811000 audit[2293]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2293 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:06.811000 audit[2293]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffcbe69550 a2=0 a3=1 items=0 ppid=2238 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:06.811000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 12 00:25:06.830295 kernel: Initializing XFRM netlink socket Jul 12 00:25:06.875800 env[2238]: time="2025-07-12T00:25:06.875662524Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 12 00:25:06.879524 (udev-worker)[2249]: Network interface NamePolicy= disabled on kernel command line. 
Jul 12 00:25:06.913000 audit[2301]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2301 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:06.913000 audit[2301]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=fffff16887a0 a2=0 a3=1 items=0 ppid=2238 pid=2301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:06.913000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jul 12 00:25:06.936000 audit[2304]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2304 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:06.936000 audit[2304]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffd68fc5f0 a2=0 a3=1 items=0 ppid=2238 pid=2304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:06.936000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jul 12 00:25:06.942000 audit[2307]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2307 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:06.942000 audit[2307]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffd9cbf500 a2=0 a3=1 items=0 ppid=2238 pid=2307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:06.942000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jul 12 00:25:06.946000 audit[2309]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2309 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:06.946000 audit[2309]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffdfdc9230 a2=0 a3=1 items=0 ppid=2238 pid=2309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:06.946000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jul 12 00:25:06.950000 audit[2311]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=2311 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:06.950000 audit[2311]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=ffffcae65210 a2=0 a3=1 items=0 ppid=2238 pid=2311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:06.950000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jul 12 00:25:06.954000 audit[2313]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=2313 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:06.954000 audit[2313]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=ffffc49c2390 a2=0 a3=1 items=0 ppid=2238 pid=2313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:06.954000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jul 12 00:25:06.958000 audit[2315]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2315 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:06.958000 audit[2315]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=ffffefb1aed0 a2=0 a3=1 items=0 ppid=2238 pid=2315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:06.958000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jul 12 00:25:06.973000 audit[2318]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=2318 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:06.973000 audit[2318]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=fffffbef30f0 a2=0 a3=1 items=0 ppid=2238 pid=2318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:06.973000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jul 12 00:25:06.977000 audit[2320]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=2320 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:06.977000 audit[2320]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffc5ddf050 a2=0 a3=1 items=0 ppid=2238 pid=2320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:06.977000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 12 00:25:06.982000 audit[2322]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2322 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:06.982000 audit[2322]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffc8e25540 a2=0 a3=1 items=0 ppid=2238 pid=2322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:06.982000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 12 00:25:06.986000 audit[2324]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2324 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:06.986000 audit[2324]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc27c1a20 a2=0 a3=1 items=0 ppid=2238 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:06.986000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jul 12 
00:25:06.988784 systemd-networkd[1586]: docker0: Link UP Jul 12 00:25:07.001000 audit[2328]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=2328 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:07.001000 audit[2328]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe686df80 a2=0 a3=1 items=0 ppid=2238 pid=2328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:07.001000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 12 00:25:07.008000 audit[2329]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2329 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:07.008000 audit[2329]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffcf1a8af0 a2=0 a3=1 items=0 ppid=2238 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:07.008000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 12 00:25:07.011111 env[2238]: time="2025-07-12T00:25:07.011047109Z" level=info msg="Loading containers: done." Jul 12 00:25:07.044028 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1251437012-merged.mount: Deactivated successfully. 
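The `PROCTITLE` records above hex-encode the full command line of the process (argv joined with NUL bytes), so the iptables invocations can be recovered directly from the hex. A minimal sketch of a decoder (the function name is ours, not part of any tool in the log):

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE hex string into a readable command line.

    The kernel audit subsystem hex-encodes argv with NUL separators;
    bytes.fromhex() reverses the encoding and split(b"\x00") recovers
    the individual arguments.
    """
    raw = bytes.fromhex(hex_str)
    # Filter empty parts in case of consecutive NUL separators.
    return " ".join(p.decode() for p in raw.split(b"\x00") if p)


# Example: the first PROCTITLE record in this section decodes to a
# Docker isolation-chain rule being appended.
print(decode_proctitle(
    "2F7573722F7362696E2F69707461626C6573002D2D77616974002D41"
    "00444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A"
    "0052455455524E"
))
# → /usr/sbin/iptables --wait -A DOCKER-ISOLATION-STAGE-2 -j RETURN
```

Applied to the remaining records, this shows the Docker daemon setting up its usual FORWARD/DOCKER-USER/DOCKER-ISOLATION chains and the MASQUERADE rule for 172.17.0.0/16.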
Jul 12 00:25:07.059067 env[2238]: time="2025-07-12T00:25:07.058981828Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 12 00:25:07.059489 env[2238]: time="2025-07-12T00:25:07.059436981Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 12 00:25:07.059749 env[2238]: time="2025-07-12T00:25:07.059704102Z" level=info msg="Daemon has completed initialization" Jul 12 00:25:07.083617 systemd[1]: Started docker.service. Jul 12 00:25:07.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:07.100384 env[2238]: time="2025-07-12T00:25:07.100256513Z" level=info msg="API listen on /run/docker.sock" Jul 12 00:25:08.654132 env[1913]: time="2025-07-12T00:25:08.654019421Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 12 00:25:09.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:09.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:09.195368 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 12 00:25:09.195632 systemd[1]: Stopped kubelet.service. Jul 12 00:25:09.200197 systemd[1]: Starting kubelet.service... Jul 12 00:25:09.247395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1006265713.mount: Deactivated successfully. Jul 12 00:25:09.635291 systemd[1]: Started kubelet.service. 
Jul 12 00:25:09.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:09.759928 kubelet[2368]: E0712 00:25:09.759870 2368 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:25:09.767076 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:25:09.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 12 00:25:09.767522 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 12 00:25:11.040500 env[1913]: time="2025-07-12T00:25:11.040441656Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:11.042996 env[1913]: time="2025-07-12T00:25:11.042945190Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:11.046374 env[1913]: time="2025-07-12T00:25:11.046324559Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:11.049614 env[1913]: time="2025-07-12T00:25:11.049550373Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:11.051517 env[1913]: time="2025-07-12T00:25:11.051469910Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 12 00:25:11.054106 env[1913]: time="2025-07-12T00:25:11.054056221Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 12 00:25:12.840542 env[1913]: time="2025-07-12T00:25:12.840481657Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:12.843997 env[1913]: time="2025-07-12T00:25:12.843946129Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Jul 12 00:25:12.848150 env[1913]: time="2025-07-12T00:25:12.848100120Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:12.851335 env[1913]: time="2025-07-12T00:25:12.851287178Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:12.854068 env[1913]: time="2025-07-12T00:25:12.854007340Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 12 00:25:12.854984 env[1913]: time="2025-07-12T00:25:12.854913953Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 12 00:25:14.313265 env[1913]: time="2025-07-12T00:25:14.313181126Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:14.315664 env[1913]: time="2025-07-12T00:25:14.315601901Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:14.319005 env[1913]: time="2025-07-12T00:25:14.318949296Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:14.323916 env[1913]: time="2025-07-12T00:25:14.323855722Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference 
\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 12 00:25:14.324530 env[1913]: time="2025-07-12T00:25:14.324463104Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:14.325213 env[1913]: time="2025-07-12T00:25:14.324920359Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 12 00:25:15.694523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2967089190.mount: Deactivated successfully. Jul 12 00:25:16.605272 env[1913]: time="2025-07-12T00:25:16.605164793Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:16.615528 env[1913]: time="2025-07-12T00:25:16.615469503Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:16.621647 env[1913]: time="2025-07-12T00:25:16.621592383Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:16.625203 env[1913]: time="2025-07-12T00:25:16.625156121Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:16.626173 env[1913]: time="2025-07-12T00:25:16.626108889Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 12 00:25:16.626902 env[1913]: 
time="2025-07-12T00:25:16.626856371Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 12 00:25:17.159604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1720883571.mount: Deactivated successfully. Jul 12 00:25:18.560582 env[1913]: time="2025-07-12T00:25:18.560511956Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:18.569412 env[1913]: time="2025-07-12T00:25:18.569025107Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:18.573911 env[1913]: time="2025-07-12T00:25:18.573850961Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:18.579402 env[1913]: time="2025-07-12T00:25:18.579338273Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:18.580256 env[1913]: time="2025-07-12T00:25:18.580167760Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 12 00:25:18.581250 env[1913]: time="2025-07-12T00:25:18.581175414Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 12 00:25:19.702949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount49920299.mount: Deactivated successfully. 
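The `SYSCALL` audit records in this log are flat `key=value` sequences (some values quoted, some bare, e.g. `syscall=211 comm="iptables"`). A small sketch of a field parser for pulling out individual attributes; this is our own helper, not something the audit tooling provides under this name:

```python
import re

def parse_audit_fields(record: str) -> dict:
    """Split an audit SYSCALL record into a dict of key=value fields.

    Values may be double-quoted (comm="iptables") or bare (syscall=211);
    quotes are stripped from the result.
    """
    return {k: v.strip('"')
            for k, v in re.findall(r'(\w+)=("[^"]*"|\S+)', record)}


rec = ('arch=c00000b7 syscall=211 success=yes exit=96 pid=2284 '
      'comm="iptables" exe="/usr/sbin/xtables-nft-multi" key=(null)')
fields = parse_audit_fields(rec)
print(fields["syscall"], fields["comm"], fields["exe"])
# → 211 iptables /usr/sbin/xtables-nft-multi
```

On aarch64 (`arch=c00000b7`), syscall 211 is `sendto`, which matches the nftables netlink traffic these `NETFILTER_CFG` records describe.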
Jul 12 00:25:19.719238 env[1913]: time="2025-07-12T00:25:19.719159622Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:19.723638 env[1913]: time="2025-07-12T00:25:19.723574290Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:19.727342 env[1913]: time="2025-07-12T00:25:19.727283383Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:19.730727 env[1913]: time="2025-07-12T00:25:19.730664569Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:19.731986 env[1913]: time="2025-07-12T00:25:19.731934538Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 12 00:25:19.732687 env[1913]: time="2025-07-12T00:25:19.732642499Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 12 00:25:20.019005 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 12 00:25:20.019365 systemd[1]: Stopped kubelet.service. Jul 12 00:25:20.030108 kernel: kauditd_printk_skb: 88 callbacks suppressed Jul 12 00:25:20.030275 kernel: audit: type=1130 audit(1752279920.018:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:25:20.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:20.022070 systemd[1]: Starting kubelet.service... Jul 12 00:25:20.041142 kernel: audit: type=1131 audit(1752279920.018:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:20.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:20.310562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3951658398.mount: Deactivated successfully. Jul 12 00:25:20.394375 systemd[1]: Started kubelet.service. Jul 12 00:25:20.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:20.406854 kernel: audit: type=1130 audit(1752279920.394:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:25:20.490734 kubelet[2382]: E0712 00:25:20.490651 2382 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:25:20.501277 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:25:20.501669 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:25:20.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 12 00:25:20.513267 kernel: audit: type=1131 audit(1752279920.502:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jul 12 00:25:23.122296 env[1913]: time="2025-07-12T00:25:23.122196056Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:23.125439 env[1913]: time="2025-07-12T00:25:23.125376869Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:23.129422 env[1913]: time="2025-07-12T00:25:23.129367196Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:23.133096 env[1913]: time="2025-07-12T00:25:23.133031380Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:23.135155 env[1913]: time="2025-07-12T00:25:23.135074978Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 12 00:25:25.578979 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 12 00:25:25.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:25.590287 kernel: audit: type=1131 audit(1752279925.579:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:30.630677 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Jul 12 00:25:30.631002 systemd[1]: Stopped kubelet.service. Jul 12 00:25:30.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:30.643082 systemd[1]: Starting kubelet.service... Jul 12 00:25:30.661194 kernel: audit: type=1130 audit(1752279930.629:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:30.661359 kernel: audit: type=1131 audit(1752279930.629:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:30.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:31.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:31.027929 systemd[1]: Started kubelet.service. Jul 12 00:25:31.040302 kernel: audit: type=1130 audit(1752279931.027:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:25:31.138004 kubelet[2418]: E0712 00:25:31.137941 2418 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:25:31.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 12 00:25:31.141879 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:25:31.142294 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:25:31.152253 kernel: audit: type=1131 audit(1752279931.141:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 12 00:25:31.385366 systemd[1]: Stopped kubelet.service. Jul 12 00:25:31.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:31.394208 systemd[1]: Starting kubelet.service... Jul 12 00:25:31.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:31.399253 kernel: audit: type=1130 audit(1752279931.385:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:25:31.412112 kernel: audit: type=1131 audit(1752279931.389:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:31.458250 systemd[1]: Reloading. Jul 12 00:25:31.682092 /usr/lib/systemd/system-generators/torcx-generator[2452]: time="2025-07-12T00:25:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 12 00:25:31.682146 /usr/lib/systemd/system-generators/torcx-generator[2452]: time="2025-07-12T00:25:31Z" level=info msg="torcx already run" Jul 12 00:25:31.876438 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 12 00:25:31.876968 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 12 00:25:31.920036 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:25:32.132077 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 12 00:25:32.132536 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 12 00:25:32.133423 systemd[1]: Stopped kubelet.service. Jul 12 00:25:32.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jul 12 00:25:32.144287 kernel: audit: type=1130 audit(1752279932.132:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 12 00:25:32.146480 systemd[1]: Starting kubelet.service... Jul 12 00:25:32.459021 systemd[1]: Started kubelet.service. Jul 12 00:25:32.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:32.472273 kernel: audit: type=1130 audit(1752279932.458:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:32.566607 kubelet[2527]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:25:32.566607 kubelet[2527]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 12 00:25:32.566607 kubelet[2527]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 12 00:25:32.567278 kubelet[2527]: I0712 00:25:32.566715 2527 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:25:33.749963 amazon-ssm-agent[1887]: 2025-07-12 00:25:33 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Jul 12 00:25:34.018842 kubelet[2527]: I0712 00:25:34.018532 2527 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 12 00:25:34.018842 kubelet[2527]: I0712 00:25:34.018590 2527 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:25:34.019574 kubelet[2527]: I0712 00:25:34.019164 2527 server.go:934] "Client rotation is on, will bootstrap in background" Jul 12 00:25:34.122767 kubelet[2527]: E0712 00:25:34.122709 2527 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.29.120:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.29.120:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:25:34.123476 kubelet[2527]: I0712 00:25:34.123444 2527 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:25:34.136671 kubelet[2527]: E0712 00:25:34.136603 2527 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:25:34.136671 kubelet[2527]: I0712 00:25:34.136659 2527 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Jul 12 00:25:34.144858 kubelet[2527]: I0712 00:25:34.144799 2527 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 12 00:25:34.147515 kubelet[2527]: I0712 00:25:34.147450 2527 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 12 00:25:34.147913 kubelet[2527]: I0712 00:25:34.147833 2527 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:25:34.148398 kubelet[2527]: I0712 00:25:34.147908 2527 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-120","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryM
anagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 12 00:25:34.148624 kubelet[2527]: I0712 00:25:34.148543 2527 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:25:34.148624 kubelet[2527]: I0712 00:25:34.148568 2527 container_manager_linux.go:300] "Creating device plugin manager" Jul 12 00:25:34.149066 kubelet[2527]: I0712 00:25:34.149021 2527 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:25:34.158178 kubelet[2527]: I0712 00:25:34.158135 2527 kubelet.go:408] "Attempting to sync node with API server" Jul 12 00:25:34.158406 kubelet[2527]: I0712 00:25:34.158384 2527 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:25:34.158535 kubelet[2527]: I0712 00:25:34.158514 2527 kubelet.go:314] "Adding apiserver pod source" Jul 12 00:25:34.158686 kubelet[2527]: I0712 00:25:34.158664 2527 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:25:34.176265 kubelet[2527]: W0712 00:25:34.176146 2527 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.29.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-120&limit=500&resourceVersion=0": dial tcp 172.31.29.120:6443: connect: connection refused Jul 12 00:25:34.176406 kubelet[2527]: E0712 00:25:34.176279 2527 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.29.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-120&limit=500&resourceVersion=0\": dial tcp 172.31.29.120:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:25:34.179355 kubelet[2527]: I0712 00:25:34.179318 2527 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" 
apiVersion="v1" Jul 12 00:25:34.180871 kubelet[2527]: I0712 00:25:34.180839 2527 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:25:34.181438 kubelet[2527]: W0712 00:25:34.181414 2527 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 12 00:25:34.183491 kubelet[2527]: I0712 00:25:34.183460 2527 server.go:1274] "Started kubelet" Jul 12 00:25:34.183948 kubelet[2527]: W0712 00:25:34.183886 2527 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.29.120:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.29.120:6443: connect: connection refused Jul 12 00:25:34.184133 kubelet[2527]: E0712 00:25:34.184096 2527 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.29.120:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.120:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:25:34.198141 kubelet[2527]: I0712 00:25:34.198064 2527 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:25:34.200048 kubelet[2527]: I0712 00:25:34.199984 2527 server.go:449] "Adding debug handlers to kubelet server" Jul 12 00:25:34.200591 kubelet[2527]: I0712 00:25:34.200513 2527 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:25:34.201309 kubelet[2527]: I0712 00:25:34.201276 2527 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:25:34.204709 kubelet[2527]: E0712 00:25:34.201817 2527 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://172.31.29.120:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.120:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-120.18515950f00bcc3d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-120,UID:ip-172-31-29-120,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-120,},FirstTimestamp:2025-07-12 00:25:34.183394365 +0000 UTC m=+1.697517395,LastTimestamp:2025-07-12 00:25:34.183394365 +0000 UTC m=+1.697517395,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-120,}" Jul 12 00:25:34.203000 audit[2527]: AVC avc: denied { mac_admin } for pid=2527 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:25:34.205966 kubelet[2527]: I0712 00:25:34.204858 2527 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Jul 12 00:25:34.205966 kubelet[2527]: I0712 00:25:34.205345 2527 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Jul 12 00:25:34.205966 kubelet[2527]: I0712 00:25:34.205480 2527 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:25:34.203000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 12 00:25:34.215763 kubelet[2527]: I0712 00:25:34.215726 2527 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:25:34.217690 kernel: audit: type=1400 audit(1752279934.203:210): avc: denied { mac_admin } for pid=2527 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:25:34.217836 kernel: audit: type=1401 audit(1752279934.203:210): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 12 00:25:34.203000 audit[2527]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000978c30 a1=40006bf1e8 a2=4000978c00 a3=25 items=0 ppid=1 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:34.203000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 12 00:25:34.204000 audit[2527]: AVC avc: denied { mac_admin } for pid=2527 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:25:34.204000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 12 00:25:34.204000 audit[2527]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000b40b60 a1=4000151c50 a2=4000b38f90 a3=25 items=0 ppid=1 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:34.204000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 12 00:25:34.209000 audit[2539]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2539 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:34.209000 audit[2539]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc92ffb10 a2=0 a3=1 items=0 ppid=2527 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:34.209000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jul 12 00:25:34.211000 audit[2540]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2540 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:34.211000 audit[2540]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc6e5fdf0 a2=0 a3=1 items=0 ppid=2527 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:34.211000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 12 00:25:34.222053 kubelet[2527]: I0712 00:25:34.222005 2527 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 12 00:25:34.224463 kubelet[2527]: I0712 00:25:34.224430 2527 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 12 00:25:34.224668 kubelet[2527]: E0712 00:25:34.223566 2527 kubelet_node_status.go:453] "Error getting the current node from 
lister" err="node \"ip-172-31-29-120\" not found" Jul 12 00:25:34.224860 kubelet[2527]: I0712 00:25:34.224840 2527 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:25:34.224000 audit[2542]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2542 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:34.224000 audit[2542]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffdcbf0400 a2=0 a3=1 items=0 ppid=2527 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:34.224000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 12 00:25:34.226489 kubelet[2527]: W0712 00:25:34.226422 2527 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.29.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.120:6443: connect: connection refused Jul 12 00:25:34.226701 kubelet[2527]: E0712 00:25:34.226667 2527 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.29.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.120:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:25:34.226981 kubelet[2527]: E0712 00:25:34.226937 2527 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-120?timeout=10s\": dial tcp 172.31.29.120:6443: connect: connection refused" interval="200ms" Jul 12 00:25:34.227328 kubelet[2527]: E0712 00:25:34.227155 2527 event.go:368] "Unable to write event (may retry 
after sleeping)" err="Post \"https://172.31.29.120:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.120:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-120.18515950f00bcc3d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-120,UID:ip-172-31-29-120,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-120,},FirstTimestamp:2025-07-12 00:25:34.183394365 +0000 UTC m=+1.697517395,LastTimestamp:2025-07-12 00:25:34.183394365 +0000 UTC m=+1.697517395,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-120,}" Jul 12 00:25:34.227881 kubelet[2527]: I0712 00:25:34.227851 2527 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:25:34.228789 kubelet[2527]: I0712 00:25:34.228746 2527 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:25:34.232859 kubelet[2527]: I0712 00:25:34.232828 2527 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:25:34.232000 audit[2544]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2544 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:34.232000 audit[2544]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe08ac6c0 a2=0 a3=1 items=0 ppid=2527 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:34.232000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 12 00:25:34.249276 kubelet[2527]: E0712 00:25:34.249119 2527 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:25:34.259000 audit[2549]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2549 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:34.259000 audit[2549]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffd11de160 a2=0 a3=1 items=0 ppid=2527 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:34.259000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jul 12 00:25:34.269488 kubelet[2527]: I0712 00:25:34.269351 2527 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jul 12 00:25:34.272000 audit[2551]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=2551 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:25:34.272000 audit[2551]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffce3570b0 a2=0 a3=1 items=0 ppid=2527 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:34.272000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jul 12 00:25:34.274413 kubelet[2527]: I0712 00:25:34.274377 2527 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 12 00:25:34.274572 kubelet[2527]: I0712 00:25:34.274551 2527 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 12 00:25:34.274690 kubelet[2527]: I0712 00:25:34.274670 2527 kubelet.go:2321] "Starting kubelet main sync loop" Jul 12 00:25:34.274863 kubelet[2527]: E0712 00:25:34.274834 2527 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:25:34.275000 audit[2552]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=2552 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:34.275000 audit[2552]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc4a17f10 a2=0 a3=1 items=0 ppid=2527 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:34.275000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jul 12 
00:25:34.277000 audit[2553]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=2553 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:34.277000 audit[2553]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd70eeb70 a2=0 a3=1 items=0 ppid=2527 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:34.277000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jul 12 00:25:34.280000 audit[2555]: NETFILTER_CFG table=mangle:34 family=10 entries=1 op=nft_register_chain pid=2555 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:25:34.280000 audit[2555]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffddc3f00 a2=0 a3=1 items=0 ppid=2527 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:34.280000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jul 12 00:25:34.281000 audit[2554]: NETFILTER_CFG table=filter:35 family=2 entries=1 op=nft_register_chain pid=2554 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:34.281000 audit[2554]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd328eda0 a2=0 a3=1 items=0 ppid=2527 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:34.281000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jul 12 00:25:34.283000 audit[2556]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=2556 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:25:34.283000 audit[2556]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffcbb266b0 a2=0 a3=1 items=0 ppid=2527 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:34.283000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jul 12 00:25:34.286175 kubelet[2527]: W0712 00:25:34.286091 2527 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.29.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.120:6443: connect: connection refused Jul 12 00:25:34.286489 kubelet[2527]: E0712 00:25:34.286447 2527 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.29.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.120:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:25:34.286000 audit[2557]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=2557 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:25:34.286000 audit[2557]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff485a6b0 a2=0 a3=1 items=0 ppid=2527 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:34.286000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jul 12 00:25:34.289074 kubelet[2527]: I0712 00:25:34.289030 2527 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 12 00:25:34.289074 kubelet[2527]: I0712 00:25:34.289070 2527 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 12 00:25:34.289413 kubelet[2527]: I0712 00:25:34.289104 2527 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:25:34.291550 kubelet[2527]: I0712 00:25:34.291494 2527 policy_none.go:49] "None policy: Start" Jul 12 00:25:34.292592 kubelet[2527]: I0712 00:25:34.292547 2527 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 12 00:25:34.292705 kubelet[2527]: I0712 00:25:34.292606 2527 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:25:34.304009 kubelet[2527]: I0712 00:25:34.303960 2527 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:25:34.302000 audit[2527]: AVC avc: denied { mac_admin } for pid=2527 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:25:34.302000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 12 00:25:34.302000 audit[2527]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000f70930 a1=4000f0b020 a2=4000f70900 a3=25 items=0 ppid=1 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:34.302000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 12 00:25:34.304750 kubelet[2527]: I0712 00:25:34.304714 2527 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Jul 12 00:25:34.305044 kubelet[2527]: I0712 00:25:34.305023 2527 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:25:34.305209 kubelet[2527]: I0712 00:25:34.305143 2527 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:25:34.311458 kubelet[2527]: I0712 00:25:34.311424 2527 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:25:34.318018 kubelet[2527]: E0712 00:25:34.317964 2527 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-120\" not found" Jul 12 00:25:34.408141 kubelet[2527]: I0712 00:25:34.408090 2527 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-29-120" Jul 12 00:25:34.409051 kubelet[2527]: E0712 00:25:34.408990 2527 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.29.120:6443/api/v1/nodes\": dial tcp 172.31.29.120:6443: connect: connection refused" node="ip-172-31-29-120" Jul 12 00:25:34.425670 kubelet[2527]: I0712 00:25:34.425604 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7fce54c5f51a733b966c7985b5b3bac2-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-120\" (UID: \"7fce54c5f51a733b966c7985b5b3bac2\") " 
pod="kube-system/kube-controller-manager-ip-172-31-29-120" Jul 12 00:25:34.425670 kubelet[2527]: I0712 00:25:34.425670 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7fce54c5f51a733b966c7985b5b3bac2-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-120\" (UID: \"7fce54c5f51a733b966c7985b5b3bac2\") " pod="kube-system/kube-controller-manager-ip-172-31-29-120" Jul 12 00:25:34.425870 kubelet[2527]: I0712 00:25:34.425713 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7fce54c5f51a733b966c7985b5b3bac2-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-120\" (UID: \"7fce54c5f51a733b966c7985b5b3bac2\") " pod="kube-system/kube-controller-manager-ip-172-31-29-120" Jul 12 00:25:34.425870 kubelet[2527]: I0712 00:25:34.425758 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/689b3dba45521c1a6a4c3bd47819cd48-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-120\" (UID: \"689b3dba45521c1a6a4c3bd47819cd48\") " pod="kube-system/kube-scheduler-ip-172-31-29-120" Jul 12 00:25:34.425870 kubelet[2527]: I0712 00:25:34.425796 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7e9b67ad54a5fc0fef9630206ca54eaf-ca-certs\") pod \"kube-apiserver-ip-172-31-29-120\" (UID: \"7e9b67ad54a5fc0fef9630206ca54eaf\") " pod="kube-system/kube-apiserver-ip-172-31-29-120" Jul 12 00:25:34.425870 kubelet[2527]: I0712 00:25:34.425837 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7e9b67ad54a5fc0fef9630206ca54eaf-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-120\" (UID: 
\"7e9b67ad54a5fc0fef9630206ca54eaf\") " pod="kube-system/kube-apiserver-ip-172-31-29-120" Jul 12 00:25:34.426115 kubelet[2527]: I0712 00:25:34.425873 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7e9b67ad54a5fc0fef9630206ca54eaf-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-120\" (UID: \"7e9b67ad54a5fc0fef9630206ca54eaf\") " pod="kube-system/kube-apiserver-ip-172-31-29-120" Jul 12 00:25:34.426115 kubelet[2527]: I0712 00:25:34.425914 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7fce54c5f51a733b966c7985b5b3bac2-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-120\" (UID: \"7fce54c5f51a733b966c7985b5b3bac2\") " pod="kube-system/kube-controller-manager-ip-172-31-29-120" Jul 12 00:25:34.426115 kubelet[2527]: I0712 00:25:34.425954 2527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7fce54c5f51a733b966c7985b5b3bac2-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-120\" (UID: \"7fce54c5f51a733b966c7985b5b3bac2\") " pod="kube-system/kube-controller-manager-ip-172-31-29-120" Jul 12 00:25:34.427809 kubelet[2527]: E0712 00:25:34.427742 2527 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-120?timeout=10s\": dial tcp 172.31.29.120:6443: connect: connection refused" interval="400ms" Jul 12 00:25:34.611392 kubelet[2527]: I0712 00:25:34.611211 2527 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-29-120" Jul 12 00:25:34.612580 kubelet[2527]: E0712 00:25:34.612535 2527 kubelet_node_status.go:95] "Unable to register node with API server" 
err="Post \"https://172.31.29.120:6443/api/v1/nodes\": dial tcp 172.31.29.120:6443: connect: connection refused" node="ip-172-31-29-120" Jul 12 00:25:34.693018 env[1913]: time="2025-07-12T00:25:34.692513677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-120,Uid:7e9b67ad54a5fc0fef9630206ca54eaf,Namespace:kube-system,Attempt:0,}" Jul 12 00:25:34.696280 env[1913]: time="2025-07-12T00:25:34.696197696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-120,Uid:7fce54c5f51a733b966c7985b5b3bac2,Namespace:kube-system,Attempt:0,}" Jul 12 00:25:34.700123 env[1913]: time="2025-07-12T00:25:34.699602509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-120,Uid:689b3dba45521c1a6a4c3bd47819cd48,Namespace:kube-system,Attempt:0,}" Jul 12 00:25:34.828908 kubelet[2527]: E0712 00:25:34.828831 2527 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-120?timeout=10s\": dial tcp 172.31.29.120:6443: connect: connection refused" interval="800ms" Jul 12 00:25:34.995696 kubelet[2527]: W0712 00:25:34.995600 2527 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.29.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-120&limit=500&resourceVersion=0": dial tcp 172.31.29.120:6443: connect: connection refused Jul 12 00:25:34.995856 kubelet[2527]: E0712 00:25:34.995706 2527 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.29.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-120&limit=500&resourceVersion=0\": dial tcp 172.31.29.120:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:25:35.014859 kubelet[2527]: I0712 00:25:35.014819 2527 
kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-29-120" Jul 12 00:25:35.015500 kubelet[2527]: E0712 00:25:35.015434 2527 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.29.120:6443/api/v1/nodes\": dial tcp 172.31.29.120:6443: connect: connection refused" node="ip-172-31-29-120" Jul 12 00:25:35.161375 kubelet[2527]: W0712 00:25:35.161292 2527 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.29.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.120:6443: connect: connection refused Jul 12 00:25:35.161959 kubelet[2527]: E0712 00:25:35.161392 2527 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.29.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.120:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:25:35.168565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1461387660.mount: Deactivated successfully. 
Jul 12 00:25:35.170458 env[1913]: time="2025-07-12T00:25:35.170068261Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:35.177438 env[1913]: time="2025-07-12T00:25:35.177360203Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:35.179803 env[1913]: time="2025-07-12T00:25:35.179750618Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:35.181563 env[1913]: time="2025-07-12T00:25:35.181520110Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:35.185687 env[1913]: time="2025-07-12T00:25:35.185624708Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:35.190508 env[1913]: time="2025-07-12T00:25:35.190441346Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:35.192986 env[1913]: time="2025-07-12T00:25:35.192847889Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:35.196945 env[1913]: time="2025-07-12T00:25:35.196889835Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:35.199425 env[1913]: time="2025-07-12T00:25:35.199375615Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:35.201649 env[1913]: time="2025-07-12T00:25:35.201604556Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:35.204644 env[1913]: time="2025-07-12T00:25:35.204569957Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:35.216515 env[1913]: time="2025-07-12T00:25:35.216463411Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:35.257144 env[1913]: time="2025-07-12T00:25:35.256904489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:25:35.257144 env[1913]: time="2025-07-12T00:25:35.256983126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:25:35.257144 env[1913]: time="2025-07-12T00:25:35.257010463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:25:35.259060 env[1913]: time="2025-07-12T00:25:35.258929860Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3d5781a4361d2fcdbb23b449a80a8be34b0efbb3119597b328cef84bfecd963f pid=2574 runtime=io.containerd.runc.v2 Jul 12 00:25:35.268993 env[1913]: time="2025-07-12T00:25:35.268868984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:25:35.268993 env[1913]: time="2025-07-12T00:25:35.268945581Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:25:35.269349 env[1913]: time="2025-07-12T00:25:35.269256504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:25:35.270294 env[1913]: time="2025-07-12T00:25:35.270154666Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8cd45536d37bd6ba7057f32b5199b1e4f4080e88be55fbd7de5adf7230c5741d pid=2582 runtime=io.containerd.runc.v2 Jul 12 00:25:35.276822 env[1913]: time="2025-07-12T00:25:35.276618251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:25:35.276822 env[1913]: time="2025-07-12T00:25:35.276741156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:25:35.276822 env[1913]: time="2025-07-12T00:25:35.276769285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:25:35.277602 env[1913]: time="2025-07-12T00:25:35.277505637Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f03369749361fd5862a9ccd9f9ec07ba37834258f7a28385ffb26685e76ec36 pid=2597 runtime=io.containerd.runc.v2 Jul 12 00:25:35.471207 env[1913]: time="2025-07-12T00:25:35.470602772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-120,Uid:7e9b67ad54a5fc0fef9630206ca54eaf,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d5781a4361d2fcdbb23b449a80a8be34b0efbb3119597b328cef84bfecd963f\"" Jul 12 00:25:35.471207 env[1913]: time="2025-07-12T00:25:35.470859275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-120,Uid:7fce54c5f51a733b966c7985b5b3bac2,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f03369749361fd5862a9ccd9f9ec07ba37834258f7a28385ffb26685e76ec36\"" Jul 12 00:25:35.478595 env[1913]: time="2025-07-12T00:25:35.478215910Z" level=info msg="CreateContainer within sandbox \"8f03369749361fd5862a9ccd9f9ec07ba37834258f7a28385ffb26685e76ec36\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 12 00:25:35.478813 kubelet[2527]: W0712 00:25:35.478372 2527 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.29.120:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.29.120:6443: connect: connection refused Jul 12 00:25:35.478813 kubelet[2527]: E0712 00:25:35.478528 2527 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.29.120:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.120:6443: connect: connection refused" logger="UnhandledError" 
Jul 12 00:25:35.478941 env[1913]: time="2025-07-12T00:25:35.478867181Z" level=info msg="CreateContainer within sandbox \"3d5781a4361d2fcdbb23b449a80a8be34b0efbb3119597b328cef84bfecd963f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 12 00:25:35.496879 env[1913]: time="2025-07-12T00:25:35.495834772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-120,Uid:689b3dba45521c1a6a4c3bd47819cd48,Namespace:kube-system,Attempt:0,} returns sandbox id \"8cd45536d37bd6ba7057f32b5199b1e4f4080e88be55fbd7de5adf7230c5741d\"" Jul 12 00:25:35.497297 kubelet[2527]: W0712 00:25:35.497186 2527 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.29.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.120:6443: connect: connection refused Jul 12 00:25:35.497417 kubelet[2527]: E0712 00:25:35.497317 2527 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.29.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.120:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:25:35.500796 env[1913]: time="2025-07-12T00:25:35.500640850Z" level=info msg="CreateContainer within sandbox \"8cd45536d37bd6ba7057f32b5199b1e4f4080e88be55fbd7de5adf7230c5741d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 12 00:25:35.512562 env[1913]: time="2025-07-12T00:25:35.512389054Z" level=info msg="CreateContainer within sandbox \"8f03369749361fd5862a9ccd9f9ec07ba37834258f7a28385ffb26685e76ec36\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c282e48d63ff3a66722e8c7e4666d70a83d1f6a6d693739a252fab0b8ba0abe8\"" Jul 12 00:25:35.514856 env[1913]: time="2025-07-12T00:25:35.514798705Z" level=info msg="StartContainer for 
\"c282e48d63ff3a66722e8c7e4666d70a83d1f6a6d693739a252fab0b8ba0abe8\"" Jul 12 00:25:35.530661 env[1913]: time="2025-07-12T00:25:35.530598239Z" level=info msg="CreateContainer within sandbox \"3d5781a4361d2fcdbb23b449a80a8be34b0efbb3119597b328cef84bfecd963f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ec6af51de8af829b4ff6ad6da3a685eba717a5b26a0256c3920c5fbde061989d\"" Jul 12 00:25:35.531840 env[1913]: time="2025-07-12T00:25:35.531789600Z" level=info msg="StartContainer for \"ec6af51de8af829b4ff6ad6da3a685eba717a5b26a0256c3920c5fbde061989d\"" Jul 12 00:25:35.535629 env[1913]: time="2025-07-12T00:25:35.535560486Z" level=info msg="CreateContainer within sandbox \"8cd45536d37bd6ba7057f32b5199b1e4f4080e88be55fbd7de5adf7230c5741d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"292358489d61335d1708557931706312612edee8479aed650b2afd8ef61bb125\"" Jul 12 00:25:35.536664 env[1913]: time="2025-07-12T00:25:35.536609286Z" level=info msg="StartContainer for \"292358489d61335d1708557931706312612edee8479aed650b2afd8ef61bb125\"" Jul 12 00:25:35.630108 kubelet[2527]: E0712 00:25:35.630026 2527 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-120?timeout=10s\": dial tcp 172.31.29.120:6443: connect: connection refused" interval="1.6s" Jul 12 00:25:35.727704 env[1913]: time="2025-07-12T00:25:35.727638966Z" level=info msg="StartContainer for \"c282e48d63ff3a66722e8c7e4666d70a83d1f6a6d693739a252fab0b8ba0abe8\" returns successfully" Jul 12 00:25:35.773659 env[1913]: time="2025-07-12T00:25:35.773518170Z" level=info msg="StartContainer for \"ec6af51de8af829b4ff6ad6da3a685eba717a5b26a0256c3920c5fbde061989d\" returns successfully" Jul 12 00:25:35.793819 env[1913]: time="2025-07-12T00:25:35.793756289Z" level=info msg="StartContainer for \"292358489d61335d1708557931706312612edee8479aed650b2afd8ef61bb125\" 
returns successfully" Jul 12 00:25:35.818748 kubelet[2527]: I0712 00:25:35.818001 2527 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-29-120" Jul 12 00:25:35.818748 kubelet[2527]: E0712 00:25:35.818697 2527 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.29.120:6443/api/v1/nodes\": dial tcp 172.31.29.120:6443: connect: connection refused" node="ip-172-31-29-120" Jul 12 00:25:37.421259 kubelet[2527]: I0712 00:25:37.421191 2527 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-29-120" Jul 12 00:25:39.179864 kubelet[2527]: I0712 00:25:39.179821 2527 apiserver.go:52] "Watching apiserver" Jul 12 00:25:39.197766 kubelet[2527]: E0712 00:25:39.197719 2527 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-29-120\" not found" node="ip-172-31-29-120" Jul 12 00:25:39.225180 kubelet[2527]: I0712 00:25:39.225126 2527 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 12 00:25:39.367146 kubelet[2527]: I0712 00:25:39.367089 2527 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-29-120" Jul 12 00:25:40.413348 update_engine[1906]: I0712 00:25:40.413287 1906 update_attempter.cc:509] Updating boot flags... Jul 12 00:25:41.967812 systemd[1]: Reloading. 
Jul 12 00:25:42.131169 /usr/lib/systemd/system-generators/torcx-generator[2914]: time="2025-07-12T00:25:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 12 00:25:42.131249 /usr/lib/systemd/system-generators/torcx-generator[2914]: time="2025-07-12T00:25:42Z" level=info msg="torcx already run" Jul 12 00:25:42.341058 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 12 00:25:42.341095 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 12 00:25:42.389546 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:25:42.632674 systemd[1]: Stopping kubelet.service... Jul 12 00:25:42.656396 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:25:42.657038 systemd[1]: Stopped kubelet.service. Jul 12 00:25:42.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:42.659180 kernel: kauditd_printk_skb: 46 callbacks suppressed Jul 12 00:25:42.659309 kernel: audit: type=1131 audit(1752279942.655:225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:42.668631 systemd[1]: Starting kubelet.service... 
Jul 12 00:25:42.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:42.998323 systemd[1]: Started kubelet.service. Jul 12 00:25:43.018290 kernel: audit: type=1130 audit(1752279942.997:226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:25:43.172521 kubelet[2983]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:25:43.172521 kubelet[2983]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 12 00:25:43.172521 kubelet[2983]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 12 00:25:43.173174 kubelet[2983]: I0712 00:25:43.172642 2983 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:25:43.188266 kubelet[2983]: I0712 00:25:43.187610 2983 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 12 00:25:43.188266 kubelet[2983]: I0712 00:25:43.187662 2983 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:25:43.188266 kubelet[2983]: I0712 00:25:43.188170 2983 server.go:934] "Client rotation is on, will bootstrap in background" Jul 12 00:25:43.191110 kubelet[2983]: I0712 00:25:43.191039 2983 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 12 00:25:43.195774 kubelet[2983]: I0712 00:25:43.195122 2983 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:25:43.214296 kubelet[2983]: E0712 00:25:43.213483 2983 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:25:43.214296 kubelet[2983]: I0712 00:25:43.213539 2983 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:25:43.218947 kubelet[2983]: I0712 00:25:43.218148 2983 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:25:43.218947 kubelet[2983]: I0712 00:25:43.218893 2983 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 12 00:25:43.219178 kubelet[2983]: I0712 00:25:43.219102 2983 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:25:43.220386 kubelet[2983]: I0712 00:25:43.219150 2983 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-120","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManage
rPolicyOptions":null,"CgroupVersion":1} Jul 12 00:25:43.220386 kubelet[2983]: I0712 00:25:43.219517 2983 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:25:43.220386 kubelet[2983]: I0712 00:25:43.219540 2983 container_manager_linux.go:300] "Creating device plugin manager" Jul 12 00:25:43.220386 kubelet[2983]: I0712 00:25:43.219623 2983 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:25:43.220386 kubelet[2983]: I0712 00:25:43.219795 2983 kubelet.go:408] "Attempting to sync node with API server" Jul 12 00:25:43.221003 kubelet[2983]: I0712 00:25:43.219819 2983 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:25:43.221003 kubelet[2983]: I0712 00:25:43.219850 2983 kubelet.go:314] "Adding apiserver pod source" Jul 12 00:25:43.221003 kubelet[2983]: I0712 00:25:43.219876 2983 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:25:43.221824 kubelet[2983]: I0712 00:25:43.221780 2983 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 12 00:25:43.223245 kubelet[2983]: I0712 00:25:43.222691 2983 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:25:43.223427 kubelet[2983]: I0712 00:25:43.223395 2983 server.go:1274] "Started kubelet" Jul 12 00:25:43.231000 audit[2983]: AVC avc: denied { mac_admin } for pid=2983 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:25:43.240973 kubelet[2983]: I0712 00:25:43.232744 2983 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Jul 12 00:25:43.240973 kubelet[2983]: I0712 00:25:43.232814 2983 kubelet.go:1434] "Unprivileged 
containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Jul 12 00:25:43.240973 kubelet[2983]: I0712 00:25:43.232857 2983 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:25:43.241414 kubelet[2983]: I0712 00:25:43.241367 2983 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:25:43.251502 kernel: audit: type=1400 audit(1752279943.231:227): avc: denied { mac_admin } for pid=2983 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:25:43.251648 kernel: audit: type=1401 audit(1752279943.231:227): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 12 00:25:43.231000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 12 00:25:43.251765 kubelet[2983]: I0712 00:25:43.251508 2983 server.go:449] "Adding debug handlers to kubelet server" Jul 12 00:25:43.265824 kernel: audit: type=1300 audit(1752279943.231:227): arch=c00000b7 syscall=5 success=no exit=-22 a0=40008b17a0 a1=4000862c78 a2=40008b1770 a3=25 items=0 ppid=1 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:43.231000 audit[2983]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40008b17a0 a1=4000862c78 a2=40008b1770 a3=25 items=0 ppid=1 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:43.266137 kubelet[2983]: I0712 00:25:43.254213 2983 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:25:43.266137 kubelet[2983]: I0712 00:25:43.261793 
2983 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:25:43.266137 kubelet[2983]: I0712 00:25:43.262880 2983 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:25:43.231000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 12 00:25:43.231000 audit[2983]: AVC avc: denied { mac_admin } for pid=2983 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:25:43.289199 kernel: audit: type=1327 audit(1752279943.231:227): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 12 00:25:43.289324 kernel: audit: type=1400 audit(1752279943.231:228): avc: denied { mac_admin } for pid=2983 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:25:43.231000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 12 00:25:43.295580 kubelet[2983]: I0712 00:25:43.294485 2983 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 12 00:25:43.295580 kubelet[2983]: E0712 00:25:43.294847 2983 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-120\" not found" Jul 12 00:25:43.295771 kubelet[2983]: I0712 00:25:43.295705 2983 desired_state_of_world_populator.go:147] "Desired state 
populator starts to run" Jul 12 00:25:43.295965 kubelet[2983]: I0712 00:25:43.295929 2983 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:25:43.231000 audit[2983]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=400086ef20 a1=4000862c90 a2=40008b1830 a3=25 items=0 ppid=1 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:43.315324 kernel: audit: type=1401 audit(1752279943.231:228): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 12 00:25:43.315433 kernel: audit: type=1300 audit(1752279943.231:228): arch=c00000b7 syscall=5 success=no exit=-22 a0=400086ef20 a1=4000862c90 a2=40008b1830 a3=25 items=0 ppid=1 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:43.326636 kubelet[2983]: I0712 00:25:43.325671 2983 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:25:43.326636 kubelet[2983]: I0712 00:25:43.325973 2983 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:25:43.231000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 12 00:25:43.342439 kernel: audit: type=1327 audit(1752279943.231:228): 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 12 00:25:43.342549 kubelet[2983]: E0712 00:25:43.333762 2983 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:25:43.349335 kubelet[2983]: I0712 00:25:43.347152 2983 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:25:43.380954 kubelet[2983]: I0712 00:25:43.380856 2983 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:25:43.383538 kubelet[2983]: I0712 00:25:43.383486 2983 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 12 00:25:43.383699 kubelet[2983]: I0712 00:25:43.383574 2983 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 12 00:25:43.383699 kubelet[2983]: I0712 00:25:43.383641 2983 kubelet.go:2321] "Starting kubelet main sync loop" Jul 12 00:25:43.383814 kubelet[2983]: E0712 00:25:43.383739 2983 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:25:43.484146 kubelet[2983]: E0712 00:25:43.484086 2983 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 12 00:25:43.508795 kubelet[2983]: I0712 00:25:43.508757 2983 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 12 00:25:43.509015 kubelet[2983]: I0712 00:25:43.508988 2983 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 12 00:25:43.509182 kubelet[2983]: I0712 00:25:43.509162 2983 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:25:43.509819 kubelet[2983]: I0712 00:25:43.509773 2983 
state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 12 00:25:43.510037 kubelet[2983]: I0712 00:25:43.509964 2983 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 12 00:25:43.510187 kubelet[2983]: I0712 00:25:43.510167 2983 policy_none.go:49] "None policy: Start" Jul 12 00:25:43.518536 kubelet[2983]: I0712 00:25:43.518389 2983 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 12 00:25:43.518863 kubelet[2983]: I0712 00:25:43.518825 2983 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:25:43.522044 kubelet[2983]: I0712 00:25:43.519206 2983 state_mem.go:75] "Updated machine memory state" Jul 12 00:25:43.522943 kubelet[2983]: I0712 00:25:43.522883 2983 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:25:43.521000 audit[2983]: AVC avc: denied { mac_admin } for pid=2983 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:25:43.521000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 12 00:25:43.521000 audit[2983]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4001240780 a1=40012425a0 a2=4001240750 a3=25 items=0 ppid=1 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:43.521000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 12 00:25:43.525058 kubelet[2983]: I0712 00:25:43.523002 2983 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Jul 12 00:25:43.525058 kubelet[2983]: I0712 00:25:43.523303 2983 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:25:43.525058 kubelet[2983]: I0712 00:25:43.523325 2983 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:25:43.535947 kubelet[2983]: I0712 00:25:43.535909 2983 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:25:43.646850 kubelet[2983]: I0712 00:25:43.643751 2983 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-29-120" Jul 12 00:25:43.666292 kubelet[2983]: I0712 00:25:43.666246 2983 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-29-120" Jul 12 00:25:43.666667 kubelet[2983]: I0712 00:25:43.666643 2983 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-29-120" Jul 12 00:25:43.704728 kubelet[2983]: I0712 00:25:43.701614 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7e9b67ad54a5fc0fef9630206ca54eaf-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-120\" (UID: \"7e9b67ad54a5fc0fef9630206ca54eaf\") " pod="kube-system/kube-apiserver-ip-172-31-29-120" Jul 12 00:25:43.704728 kubelet[2983]: I0712 00:25:43.701717 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7fce54c5f51a733b966c7985b5b3bac2-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-120\" (UID: \"7fce54c5f51a733b966c7985b5b3bac2\") " pod="kube-system/kube-controller-manager-ip-172-31-29-120" Jul 12 00:25:43.704728 kubelet[2983]: I0712 00:25:43.701791 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7fce54c5f51a733b966c7985b5b3bac2-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-120\" (UID: \"7fce54c5f51a733b966c7985b5b3bac2\") " pod="kube-system/kube-controller-manager-ip-172-31-29-120" Jul 12 00:25:43.704728 kubelet[2983]: I0712 00:25:43.701883 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7fce54c5f51a733b966c7985b5b3bac2-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-120\" (UID: \"7fce54c5f51a733b966c7985b5b3bac2\") " pod="kube-system/kube-controller-manager-ip-172-31-29-120" Jul 12 00:25:43.704728 kubelet[2983]: I0712 00:25:43.701954 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7fce54c5f51a733b966c7985b5b3bac2-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-120\" (UID: \"7fce54c5f51a733b966c7985b5b3bac2\") " pod="kube-system/kube-controller-manager-ip-172-31-29-120" Jul 12 00:25:43.705153 kubelet[2983]: I0712 00:25:43.702019 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/689b3dba45521c1a6a4c3bd47819cd48-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-120\" (UID: \"689b3dba45521c1a6a4c3bd47819cd48\") " pod="kube-system/kube-scheduler-ip-172-31-29-120" Jul 12 00:25:43.705153 kubelet[2983]: I0712 00:25:43.702063 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7e9b67ad54a5fc0fef9630206ca54eaf-ca-certs\") pod \"kube-apiserver-ip-172-31-29-120\" (UID: \"7e9b67ad54a5fc0fef9630206ca54eaf\") " pod="kube-system/kube-apiserver-ip-172-31-29-120" Jul 12 00:25:43.705153 kubelet[2983]: I0712 00:25:43.702130 2983 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7e9b67ad54a5fc0fef9630206ca54eaf-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-120\" (UID: \"7e9b67ad54a5fc0fef9630206ca54eaf\") " pod="kube-system/kube-apiserver-ip-172-31-29-120" Jul 12 00:25:43.705153 kubelet[2983]: I0712 00:25:43.702200 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7fce54c5f51a733b966c7985b5b3bac2-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-120\" (UID: \"7fce54c5f51a733b966c7985b5b3bac2\") " pod="kube-system/kube-controller-manager-ip-172-31-29-120" Jul 12 00:25:44.238894 kubelet[2983]: I0712 00:25:44.238830 2983 apiserver.go:52] "Watching apiserver" Jul 12 00:25:44.296735 kubelet[2983]: I0712 00:25:44.296665 2983 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 12 00:25:44.469432 kubelet[2983]: I0712 00:25:44.469324 2983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-120" podStartSLOduration=1.4693033039999999 podStartE2EDuration="1.469303304s" podCreationTimestamp="2025-07-12 00:25:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:25:44.467662857 +0000 UTC m=+1.440863865" watchObservedRunningTime="2025-07-12 00:25:44.469303304 +0000 UTC m=+1.442504300" Jul 12 00:25:44.505389 kubelet[2983]: I0712 00:25:44.505207 2983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-29-120" podStartSLOduration=1.5051527569999998 podStartE2EDuration="1.505152757s" podCreationTimestamp="2025-07-12 00:25:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:25:44.485749038 +0000 UTC m=+1.458950058" watchObservedRunningTime="2025-07-12 00:25:44.505152757 +0000 UTC m=+1.478353753" Jul 12 00:25:44.523426 kubelet[2983]: I0712 00:25:44.523333 2983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-29-120" podStartSLOduration=1.523282462 podStartE2EDuration="1.523282462s" podCreationTimestamp="2025-07-12 00:25:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:25:44.505767413 +0000 UTC m=+1.478968397" watchObservedRunningTime="2025-07-12 00:25:44.523282462 +0000 UTC m=+1.496483470" Jul 12 00:25:46.297313 kubelet[2983]: I0712 00:25:46.297264 2983 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 12 00:25:46.298202 env[1913]: time="2025-07-12T00:25:46.298146936Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 12 00:25:46.299015 kubelet[2983]: I0712 00:25:46.298965 2983 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 12 00:25:47.028439 kubelet[2983]: I0712 00:25:47.028395 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6f1923f6-ef88-4199-824e-6558528fa2bf-kube-proxy\") pod \"kube-proxy-r94f4\" (UID: \"6f1923f6-ef88-4199-824e-6558528fa2bf\") " pod="kube-system/kube-proxy-r94f4" Jul 12 00:25:47.028742 kubelet[2983]: I0712 00:25:47.028704 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f1923f6-ef88-4199-824e-6558528fa2bf-lib-modules\") pod \"kube-proxy-r94f4\" (UID: \"6f1923f6-ef88-4199-824e-6558528fa2bf\") " pod="kube-system/kube-proxy-r94f4" Jul 12 00:25:47.028914 kubelet[2983]: I0712 00:25:47.028880 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwlhz\" (UniqueName: \"kubernetes.io/projected/6f1923f6-ef88-4199-824e-6558528fa2bf-kube-api-access-kwlhz\") pod \"kube-proxy-r94f4\" (UID: \"6f1923f6-ef88-4199-824e-6558528fa2bf\") " pod="kube-system/kube-proxy-r94f4" Jul 12 00:25:47.029096 kubelet[2983]: I0712 00:25:47.029052 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f1923f6-ef88-4199-824e-6558528fa2bf-xtables-lock\") pod \"kube-proxy-r94f4\" (UID: \"6f1923f6-ef88-4199-824e-6558528fa2bf\") " pod="kube-system/kube-proxy-r94f4" Jul 12 00:25:47.143664 kubelet[2983]: E0712 00:25:47.143621 2983 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 12 00:25:47.143878 kubelet[2983]: E0712 00:25:47.143854 2983 projected.go:194] Error preparing data for projected volume 
kube-api-access-kwlhz for pod kube-system/kube-proxy-r94f4: configmap "kube-root-ca.crt" not found Jul 12 00:25:47.144106 kubelet[2983]: E0712 00:25:47.144081 2983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6f1923f6-ef88-4199-824e-6558528fa2bf-kube-api-access-kwlhz podName:6f1923f6-ef88-4199-824e-6558528fa2bf nodeName:}" failed. No retries permitted until 2025-07-12 00:25:47.644048827 +0000 UTC m=+4.617249811 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kwlhz" (UniqueName: "kubernetes.io/projected/6f1923f6-ef88-4199-824e-6558528fa2bf-kube-api-access-kwlhz") pod "kube-proxy-r94f4" (UID: "6f1923f6-ef88-4199-824e-6558528fa2bf") : configmap "kube-root-ca.crt" not found Jul 12 00:25:47.431759 kubelet[2983]: I0712 00:25:47.431698 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d1fdad17-8e7b-489f-a66c-f53b55686f7a-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-9rrnd\" (UID: \"d1fdad17-8e7b-489f-a66c-f53b55686f7a\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-9rrnd" Jul 12 00:25:47.432372 kubelet[2983]: I0712 00:25:47.431771 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qgcm\" (UniqueName: \"kubernetes.io/projected/d1fdad17-8e7b-489f-a66c-f53b55686f7a-kube-api-access-6qgcm\") pod \"tigera-operator-5bf8dfcb4-9rrnd\" (UID: \"d1fdad17-8e7b-489f-a66c-f53b55686f7a\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-9rrnd" Jul 12 00:25:47.543158 kubelet[2983]: I0712 00:25:47.543095 2983 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 12 00:25:47.698497 env[1913]: time="2025-07-12T00:25:47.698048061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-9rrnd,Uid:d1fdad17-8e7b-489f-a66c-f53b55686f7a,Namespace:tigera-operator,Attempt:0,}" Jul 12 00:25:47.747284 env[1913]: time="2025-07-12T00:25:47.744521326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:25:47.747284 env[1913]: time="2025-07-12T00:25:47.744686147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:25:47.747284 env[1913]: time="2025-07-12T00:25:47.744774312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:25:47.747284 env[1913]: time="2025-07-12T00:25:47.745312587Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f9809ac5a26935b231c54e0ca73743f80522bca9038941772e99f534fe54d39 pid=3036 runtime=io.containerd.runc.v2 Jul 12 00:25:47.862302 env[1913]: time="2025-07-12T00:25:47.862247016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r94f4,Uid:6f1923f6-ef88-4199-824e-6558528fa2bf,Namespace:kube-system,Attempt:0,}" Jul 12 00:25:47.874029 env[1913]: time="2025-07-12T00:25:47.873971185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-9rrnd,Uid:d1fdad17-8e7b-489f-a66c-f53b55686f7a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8f9809ac5a26935b231c54e0ca73743f80522bca9038941772e99f534fe54d39\"" Jul 12 00:25:47.879896 env[1913]: time="2025-07-12T00:25:47.879838994Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 12 00:25:47.911390 env[1913]: 
time="2025-07-12T00:25:47.911149757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:25:47.912476 env[1913]: time="2025-07-12T00:25:47.911413795Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:25:47.912476 env[1913]: time="2025-07-12T00:25:47.911450611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:25:47.912896 env[1913]: time="2025-07-12T00:25:47.912783267Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a5976d9d92457a5d90e0d27399c92af92c2f340717ed5f613c6efe2829bee6d0 pid=3078 runtime=io.containerd.runc.v2 Jul 12 00:25:47.999271 env[1913]: time="2025-07-12T00:25:47.999066661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r94f4,Uid:6f1923f6-ef88-4199-824e-6558528fa2bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5976d9d92457a5d90e0d27399c92af92c2f340717ed5f613c6efe2829bee6d0\"" Jul 12 00:25:48.006852 env[1913]: time="2025-07-12T00:25:48.006793436Z" level=info msg="CreateContainer within sandbox \"a5976d9d92457a5d90e0d27399c92af92c2f340717ed5f613c6efe2829bee6d0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 12 00:25:48.038208 env[1913]: time="2025-07-12T00:25:48.038137995Z" level=info msg="CreateContainer within sandbox \"a5976d9d92457a5d90e0d27399c92af92c2f340717ed5f613c6efe2829bee6d0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7b78e75c63554d4626a6185aff093138c7d0b83ed0916036b4cd06a591a1fe2f\"" Jul 12 00:25:48.041023 env[1913]: time="2025-07-12T00:25:48.038969540Z" level=info msg="StartContainer for \"7b78e75c63554d4626a6185aff093138c7d0b83ed0916036b4cd06a591a1fe2f\"" Jul 12 00:25:48.154817 env[1913]: 
time="2025-07-12T00:25:48.154736642Z" level=info msg="StartContainer for \"7b78e75c63554d4626a6185aff093138c7d0b83ed0916036b4cd06a591a1fe2f\" returns successfully" Jul 12 00:25:48.424000 audit[3182]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=3182 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:48.426718 kernel: kauditd_printk_skb: 4 callbacks suppressed Jul 12 00:25:48.426814 kernel: audit: type=1325 audit(1752279948.424:230): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=3182 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:48.424000 audit[3182]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff257b990 a2=0 a3=1 items=0 ppid=3133 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.426000 audit[3183]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=3183 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:25:48.456499 kernel: audit: type=1300 audit(1752279948.424:230): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff257b990 a2=0 a3=1 items=0 ppid=3133 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.456616 kernel: audit: type=1325 audit(1752279948.426:231): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=3183 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:25:48.426000 audit[3183]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd1e6e810 a2=0 a3=1 items=0 ppid=3133 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.458461 kernel: audit: type=1300 audit(1752279948.426:231): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd1e6e810 a2=0 a3=1 items=0 ppid=3133 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.426000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 12 00:25:48.481198 kernel: audit: type=1327 audit(1752279948.426:231): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 12 00:25:48.424000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 12 00:25:48.488762 kernel: audit: type=1327 audit(1752279948.424:230): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 12 00:25:48.452000 audit[3187]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=3187 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:25:48.495257 kernel: audit: type=1325 audit(1752279948.452:232): table=nat:40 family=10 entries=1 op=nft_register_chain pid=3187 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:25:48.452000 audit[3187]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe7793790 a2=0 a3=1 items=0 ppid=3133 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.520392 kernel: audit: type=1300 audit(1752279948.452:232): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe7793790 a2=0 a3=1 items=0 
ppid=3133 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.520543 kernel: audit: type=1327 audit(1752279948.452:232): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 12 00:25:48.452000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 12 00:25:48.502000 audit[3188]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=3188 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:25:48.534190 kernel: audit: type=1325 audit(1752279948.502:233): table=filter:41 family=10 entries=1 op=nft_register_chain pid=3188 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:25:48.502000 audit[3188]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffb64d210 a2=0 a3=1 items=0 ppid=3133 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.502000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 12 00:25:48.518000 audit[3186]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=3186 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:48.518000 audit[3186]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffed84bcb0 a2=0 a3=1 items=0 ppid=3133 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.518000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 12 00:25:48.527000 audit[3189]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=3189 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:48.527000 audit[3189]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffda6fda80 a2=0 a3=1 items=0 ppid=3133 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.527000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 12 00:25:48.550000 audit[3190]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=3190 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:48.550000 audit[3190]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe8916f60 a2=0 a3=1 items=0 ppid=3133 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.550000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 12 00:25:48.564000 audit[3192]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=3192 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:48.564000 audit[3192]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffce3eaf10 a2=0 a3=1 items=0 ppid=3133 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.564000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jul 12 00:25:48.572000 audit[3195]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=3195 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:48.572000 audit[3195]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffcd47e000 a2=0 a3=1 items=0 ppid=3133 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.572000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jul 12 00:25:48.575000 audit[3196]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=3196 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:48.575000 audit[3196]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe1b83c60 a2=0 a3=1 items=0 ppid=3133 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.575000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 12 00:25:48.580000 audit[3198]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=3198 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:48.580000 audit[3198]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=528 a0=3 a1=ffffd2ab19c0 a2=0 a3=1 items=0 ppid=3133 pid=3198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.580000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 12 00:25:48.583000 audit[3199]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=3199 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:48.583000 audit[3199]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc1d93470 a2=0 a3=1 items=0 ppid=3133 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.583000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 12 00:25:48.588000 audit[3201]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=3201 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:48.588000 audit[3201]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff2df9830 a2=0 a3=1 items=0 ppid=3133 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.588000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 12 00:25:48.596000 audit[3204]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=3204 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:48.596000 audit[3204]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffff5d28b0 a2=0 a3=1 items=0 ppid=3133 pid=3204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.596000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jul 12 00:25:48.599000 audit[3205]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=3205 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:48.599000 audit[3205]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffccf36f70 a2=0 a3=1 items=0 ppid=3133 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.599000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 12 00:25:48.604000 audit[3207]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=3207 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:48.604000 audit[3207]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 
a0=3 a1=ffffee9fb7f0 a2=0 a3=1 items=0 ppid=3133 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.604000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 12 00:25:48.607000 audit[3208]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=3208 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:48.607000 audit[3208]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc980c1d0 a2=0 a3=1 items=0 ppid=3133 pid=3208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.607000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 12 00:25:48.613000 audit[3210]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=3210 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:48.613000 audit[3210]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffde178020 a2=0 a3=1 items=0 ppid=3133 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.613000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 12 00:25:48.621000 audit[3213]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=3213 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:48.621000 audit[3213]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff1214840 a2=0 a3=1 items=0 ppid=3133 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.621000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 12 00:25:48.632000 audit[3216]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=3216 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:48.632000 audit[3216]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd8760970 a2=0 a3=1 items=0 ppid=3133 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.632000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 12 00:25:48.635000 audit[3217]: NETFILTER_CFG table=nat:58 family=2 entries=1 
op=nft_register_chain pid=3217 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:48.635000 audit[3217]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc103fb60 a2=0 a3=1 items=0 ppid=3133 pid=3217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.635000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 12 00:25:48.641000 audit[3219]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=3219 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:48.641000 audit[3219]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffcd7c1f40 a2=0 a3=1 items=0 ppid=3133 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.641000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 12 00:25:48.649000 audit[3222]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=3222 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:48.649000 audit[3222]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffdd106390 a2=0 a3=1 items=0 ppid=3133 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.649000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 12 00:25:48.652000 audit[3223]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=3223 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:48.652000 audit[3223]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdbe61c40 a2=0 a3=1 items=0 ppid=3133 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.652000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 12 00:25:48.658000 audit[3225]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:25:48.658000 audit[3225]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffff4cac70 a2=0 a3=1 items=0 ppid=3133 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.658000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 12 00:25:48.709000 audit[3231]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:25:48.709000 audit[3231]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcb4bd5e0 a2=0 a3=1 items=0 ppid=3133 pid=3231 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.709000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:25:48.726000 audit[3231]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:25:48.726000 audit[3231]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffcb4bd5e0 a2=0 a3=1 items=0 ppid=3133 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.726000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:25:48.730000 audit[3236]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=3236 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:25:48.730000 audit[3236]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc7ea8a50 a2=0 a3=1 items=0 ppid=3133 pid=3236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.730000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 12 00:25:48.737000 audit[3238]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=3238 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:25:48.737000 audit[3238]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffffc91d940 
a2=0 a3=1 items=0 ppid=3133 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.737000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jul 12 00:25:48.745000 audit[3241]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=3241 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:25:48.745000 audit[3241]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc5541e90 a2=0 a3=1 items=0 ppid=3133 pid=3241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.745000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jul 12 00:25:48.747000 audit[3242]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=3242 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:25:48.747000 audit[3242]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcf048670 a2=0 a3=1 items=0 ppid=3133 pid=3242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.747000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 12 00:25:48.752000 audit[3244]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=3244 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:25:48.752000 audit[3244]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe0263ca0 a2=0 a3=1 items=0 ppid=3133 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.752000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 12 00:25:48.756000 audit[3245]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=3245 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:25:48.756000 audit[3245]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffeefb5620 a2=0 a3=1 items=0 ppid=3133 pid=3245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.756000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 12 00:25:48.762000 audit[3247]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=3247 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:25:48.762000 audit[3247]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffcac08ad0 a2=0 a3=1 items=0 ppid=3133 pid=3247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.762000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jul 12 00:25:48.770000 audit[3250]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=3250 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:25:48.770000 audit[3250]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffc2109b30 a2=0 a3=1 items=0 ppid=3133 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.770000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 12 00:25:48.773000 audit[3251]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=3251 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:25:48.773000 audit[3251]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc07d8570 a2=0 a3=1 items=0 ppid=3133 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.773000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 12 00:25:48.778000 audit[3253]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule 
pid=3253 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:25:48.778000 audit[3253]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcbb1f940 a2=0 a3=1 items=0 ppid=3133 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.778000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 12 00:25:48.781000 audit[3254]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=3254 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:25:48.781000 audit[3254]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffea170950 a2=0 a3=1 items=0 ppid=3133 pid=3254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.781000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 12 00:25:48.786000 audit[3256]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=3256 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:25:48.786000 audit[3256]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdae89640 a2=0 a3=1 items=0 ppid=3133 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.786000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 12 00:25:48.794000 audit[3259]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=3259 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:25:48.794000 audit[3259]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcb9aae40 a2=0 a3=1 items=0 ppid=3133 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.794000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 12 00:25:48.801000 audit[3262]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=3262 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:25:48.801000 audit[3262]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcd50a6c0 a2=0 a3=1 items=0 ppid=3133 pid=3262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.801000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jul 12 00:25:48.803000 audit[3263]: NETFILTER_CFG table=nat:79 family=10 entries=1 
op=nft_register_chain pid=3263 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:25:48.803000 audit[3263]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffe8f6100 a2=0 a3=1 items=0 ppid=3133 pid=3263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.803000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 12 00:25:48.810000 audit[3265]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=3265 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:25:48.810000 audit[3265]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffcfbe0500 a2=0 a3=1 items=0 ppid=3133 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.810000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 12 00:25:48.817000 audit[3268]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=3268 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:25:48.817000 audit[3268]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffe2c70fa0 a2=0 a3=1 items=0 ppid=3133 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.817000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 12 00:25:48.820000 audit[3269]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=3269 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:25:48.820000 audit[3269]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffee951cd0 a2=0 a3=1 items=0 ppid=3133 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.820000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 12 00:25:48.828000 audit[3271]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=3271 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:25:48.828000 audit[3271]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffd1e529a0 a2=0 a3=1 items=0 ppid=3133 pid=3271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.828000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 12 00:25:48.830000 audit[3272]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3272 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:25:48.830000 audit[3272]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd8c02170 a2=0 a3=1 items=0 ppid=3133 
pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.830000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 12 00:25:48.839000 audit[3274]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3274 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:25:48.839000 audit[3274]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd651a270 a2=0 a3=1 items=0 ppid=3133 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.839000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 12 00:25:48.848000 audit[3277]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=3277 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:25:48.848000 audit[3277]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd1904260 a2=0 a3=1 items=0 ppid=3133 pid=3277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.848000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 12 00:25:48.857000 audit[3279]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=3279 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 12 00:25:48.857000 audit[3279]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 
a1=ffffeb22c1e0 a2=0 a3=1 items=0 ppid=3133 pid=3279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.857000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:25:48.860000 audit[3279]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=3279 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 12 00:25:48.860000 audit[3279]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffeb22c1e0 a2=0 a3=1 items=0 ppid=3133 pid=3279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:25:48.860000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:25:49.259701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4285542653.mount: Deactivated successfully. 
Jul 12 00:25:50.807417 env[1913]: time="2025-07-12T00:25:50.807358535Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:50.812835 env[1913]: time="2025-07-12T00:25:50.812781893Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:50.815620 env[1913]: time="2025-07-12T00:25:50.815540660Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:50.819089 env[1913]: time="2025-07-12T00:25:50.819039351Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:25:50.821132 env[1913]: time="2025-07-12T00:25:50.821068790Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 12 00:25:50.830552 env[1913]: time="2025-07-12T00:25:50.830486526Z" level=info msg="CreateContainer within sandbox \"8f9809ac5a26935b231c54e0ca73743f80522bca9038941772e99f534fe54d39\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 12 00:25:50.866826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4139395496.mount: Deactivated successfully. 
Jul 12 00:25:50.877657 env[1913]: time="2025-07-12T00:25:50.877569756Z" level=info msg="CreateContainer within sandbox \"8f9809ac5a26935b231c54e0ca73743f80522bca9038941772e99f534fe54d39\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8f9f138908834a242680da4b5ccde53b90ae8f7aae8ee0c7cc7cce07c4f2f541\"" Jul 12 00:25:50.879996 env[1913]: time="2025-07-12T00:25:50.878540561Z" level=info msg="StartContainer for \"8f9f138908834a242680da4b5ccde53b90ae8f7aae8ee0c7cc7cce07c4f2f541\"" Jul 12 00:25:50.999034 env[1913]: time="2025-07-12T00:25:50.998970509Z" level=info msg="StartContainer for \"8f9f138908834a242680da4b5ccde53b90ae8f7aae8ee0c7cc7cce07c4f2f541\" returns successfully" Jul 12 00:25:51.500249 kubelet[2983]: I0712 00:25:51.500070 2983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r94f4" podStartSLOduration=5.500013237 podStartE2EDuration="5.500013237s" podCreationTimestamp="2025-07-12 00:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:25:48.499162158 +0000 UTC m=+5.472363178" watchObservedRunningTime="2025-07-12 00:25:51.500013237 +0000 UTC m=+8.473214233" Jul 12 00:25:51.500942 kubelet[2983]: I0712 00:25:51.500392 2983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-9rrnd" podStartSLOduration=1.553724367 podStartE2EDuration="4.500375759s" podCreationTimestamp="2025-07-12 00:25:47 +0000 UTC" firstStartedPulling="2025-07-12 00:25:47.876727399 +0000 UTC m=+4.849928395" lastFinishedPulling="2025-07-12 00:25:50.823378791 +0000 UTC m=+7.796579787" observedRunningTime="2025-07-12 00:25:51.498732182 +0000 UTC m=+8.471933190" watchObservedRunningTime="2025-07-12 00:25:51.500375759 +0000 UTC m=+8.473576767" Jul 12 00:25:51.857130 systemd[1]: 
run-containerd-runc-k8s.io-8f9f138908834a242680da4b5ccde53b90ae8f7aae8ee0c7cc7cce07c4f2f541-runc.XR8GVZ.mount: Deactivated successfully. Jul 12 00:26:00.089000 audit[2228]: USER_END pid=2228 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 12 00:26:00.089803 sudo[2228]: pam_unix(sudo:session): session closed for user root Jul 12 00:26:00.092378 kernel: kauditd_printk_skb: 143 callbacks suppressed Jul 12 00:26:00.092523 kernel: audit: type=1106 audit(1752279960.089:281): pid=2228 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 12 00:26:00.090000 audit[2228]: CRED_DISP pid=2228 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 12 00:26:00.119804 kernel: audit: type=1104 audit(1752279960.090:282): pid=2228 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jul 12 00:26:00.120823 sshd[2224]: pam_unix(sshd:session): session closed for user core Jul 12 00:26:00.124000 audit[2224]: USER_END pid=2224 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:00.138368 kernel: audit: type=1106 audit(1752279960.124:283): pid=2224 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:00.127415 systemd[1]: sshd@6-172.31.29.120:22-147.75.109.163:51652.service: Deactivated successfully. Jul 12 00:26:00.128775 systemd[1]: session-7.scope: Deactivated successfully. Jul 12 00:26:00.140107 systemd-logind[1905]: Session 7 logged out. Waiting for processes to exit. Jul 12 00:26:00.141848 systemd-logind[1905]: Removed session 7. Jul 12 00:26:00.124000 audit[2224]: CRED_DISP pid=2224 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:00.162300 kernel: audit: type=1104 audit(1752279960.124:284): pid=2224 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:00.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.29.120:22-147.75.109.163:51652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Jul 12 00:26:00.173442 kernel: audit: type=1131 audit(1752279960.126:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.29.120:22-147.75.109.163:51652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:26:02.684000 audit[3358]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=3358 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 12 00:26:02.693275 kernel: audit: type=1325 audit(1752279962.684:286): table=filter:89 family=2 entries=15 op=nft_register_rule pid=3358 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 12 00:26:02.684000 audit[3358]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe95ae4f0 a2=0 a3=1 items=0 ppid=3133 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:26:02.684000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 12 00:26:02.722151 kernel: audit: type=1300 audit(1752279962.684:286): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe95ae4f0 a2=0 a3=1 items=0 ppid=3133 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:26:02.722305 kernel: audit: type=1327 audit(1752279962.684:286): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 12 00:26:02.713000 audit[3358]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=3358 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 12 00:26:02.732391 kernel: audit: type=1325 audit(1752279962.713:287): table=nat:90 family=2 entries=12 op=nft_register_rule pid=3358 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 12 00:26:02.713000 audit[3358]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe95ae4f0 a2=0 a3=1 items=0 ppid=3133 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:26:02.749258 kernel: audit: type=1300 audit(1752279962.713:287): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe95ae4f0 a2=0 a3=1 items=0 ppid=3133 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:26:02.713000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 12 00:26:02.757000 audit[3360]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=3360 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 12 00:26:02.757000 audit[3360]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffde95bc10 a2=0 a3=1 items=0 ppid=3133 pid=3360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:26:02.757000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 12 00:26:02.767000 audit[3360]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=3360 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 12 00:26:02.767000 audit[3360]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffde95bc10
a2=0 a3=1 items=0 ppid=3133 pid=3360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:26:02.767000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 12 00:26:13.428634 kernel: kauditd_printk_skb: 7 callbacks suppressed
Jul 12 00:26:13.428816 kernel: audit: type=1325 audit(1752279973.418:290): table=filter:93 family=2 entries=17 op=nft_register_rule pid=3362 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 12 00:26:13.418000 audit[3362]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=3362 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 12 00:26:13.418000 audit[3362]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff7894d20 a2=0 a3=1 items=0 ppid=3133 pid=3362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:26:13.442049 kernel: audit: type=1300 audit(1752279973.418:290): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff7894d20 a2=0 a3=1 items=0 ppid=3133 pid=3362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:26:13.418000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 12 00:26:13.450923 kernel: audit: type=1327 audit(1752279973.418:290): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 12 00:26:13.461000 audit[3362]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=3362 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 12 00:26:13.468284 kernel: audit: type=1325 audit(1752279973.461:291): table=nat:94 family=2 entries=12 op=nft_register_rule pid=3362 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 12 00:26:13.461000 audit[3362]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff7894d20 a2=0 a3=1 items=0 ppid=3133 pid=3362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:26:13.461000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 12 00:26:13.490679 kernel: audit: type=1300 audit(1752279973.461:291): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff7894d20 a2=0 a3=1 items=0 ppid=3133 pid=3362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:26:13.490850 kernel: audit: type=1327 audit(1752279973.461:291): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 12 00:26:13.601000 audit[3364]: NETFILTER_CFG table=filter:95 family=2 entries=19 op=nft_register_rule pid=3364 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 12 00:26:13.608275 kernel: audit: type=1325 audit(1752279973.601:292): table=filter:95 family=2 entries=19 op=nft_register_rule pid=3364 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 12 00:26:13.601000 audit[3364]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffcfd210d0 a2=0 a3=1 items=0 ppid=3133 pid=3364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:26:13.630557 kubelet[2983]: I0712 00:26:13.630480 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe7761fa-3e5b-4db1-bfce-67a4e56999fa-tigera-ca-bundle\") pod \"calico-typha-76b978477b-njz55\" (UID: \"fe7761fa-3e5b-4db1-bfce-67a4e56999fa\") " pod="calico-system/calico-typha-76b978477b-njz55"
Jul 12 00:26:13.631181 kubelet[2983]: I0712 00:26:13.630563 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz9nn\" (UniqueName: \"kubernetes.io/projected/fe7761fa-3e5b-4db1-bfce-67a4e56999fa-kube-api-access-lz9nn\") pod \"calico-typha-76b978477b-njz55\" (UID: \"fe7761fa-3e5b-4db1-bfce-67a4e56999fa\") " pod="calico-system/calico-typha-76b978477b-njz55"
Jul 12 00:26:13.631181 kubelet[2983]: I0712 00:26:13.630607 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/fe7761fa-3e5b-4db1-bfce-67a4e56999fa-typha-certs\") pod \"calico-typha-76b978477b-njz55\" (UID: \"fe7761fa-3e5b-4db1-bfce-67a4e56999fa\") " pod="calico-system/calico-typha-76b978477b-njz55"
Jul 12 00:26:13.601000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 12 00:26:13.638941 kernel: audit: type=1300 audit(1752279973.601:292): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffcfd210d0 a2=0 a3=1 items=0 ppid=3133 pid=3364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:26:13.639092 kernel: audit: type=1327 audit(1752279973.601:292): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 12 00:26:13.639000 audit[3364]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=3364 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 12 00:26:13.648127 kernel: audit: type=1325 audit(1752279973.639:293): table=nat:96 family=2 entries=12 op=nft_register_rule pid=3364 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 12 00:26:13.639000 audit[3364]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcfd210d0 a2=0 a3=1 items=0 ppid=3133 pid=3364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:26:13.639000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 12 00:26:13.836095 env[1913]: time="2025-07-12T00:26:13.835927519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76b978477b-njz55,Uid:fe7761fa-3e5b-4db1-bfce-67a4e56999fa,Namespace:calico-system,Attempt:0,}"
Jul 12 00:26:13.896362 env[1913]: time="2025-07-12T00:26:13.895655907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:26:13.896362 env[1913]: time="2025-07-12T00:26:13.895739919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:26:13.896362 env[1913]: time="2025-07-12T00:26:13.895766620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:26:13.896362 env[1913]: time="2025-07-12T00:26:13.896008708Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/31f03670a9aec1cb0a69bd7310b5211185beedd4b25f5076622ed8307c003285 pid=3374 runtime=io.containerd.runc.v2
Jul 12 00:26:14.207249 env[1913]: time="2025-07-12T00:26:14.204725111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76b978477b-njz55,Uid:fe7761fa-3e5b-4db1-bfce-67a4e56999fa,Namespace:calico-system,Attempt:0,} returns sandbox id \"31f03670a9aec1cb0a69bd7310b5211185beedd4b25f5076622ed8307c003285\""
Jul 12 00:26:14.210722 env[1913]: time="2025-07-12T00:26:14.210657855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 12 00:26:14.246823 kubelet[2983]: I0712 00:26:14.246138 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/689e5db7-a02e-44bd-8511-f17f040fb75a-var-lib-calico\") pod \"calico-node-4w679\" (UID: \"689e5db7-a02e-44bd-8511-f17f040fb75a\") " pod="calico-system/calico-node-4w679"
Jul 12 00:26:14.246823 kubelet[2983]: I0712 00:26:14.246211 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/689e5db7-a02e-44bd-8511-f17f040fb75a-var-run-calico\") pod \"calico-node-4w679\" (UID: \"689e5db7-a02e-44bd-8511-f17f040fb75a\") " pod="calico-system/calico-node-4w679"
Jul 12 00:26:14.246823 kubelet[2983]: I0712 00:26:14.246310 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/689e5db7-a02e-44bd-8511-f17f040fb75a-flexvol-driver-host\") pod \"calico-node-4w679\" (UID: \"689e5db7-a02e-44bd-8511-f17f040fb75a\") " pod="calico-system/calico-node-4w679"
Jul 12 00:26:14.246823 kubelet[2983]: I0712 00:26:14.246354 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/689e5db7-a02e-44bd-8511-f17f040fb75a-lib-modules\") pod \"calico-node-4w679\" (UID: \"689e5db7-a02e-44bd-8511-f17f040fb75a\") " pod="calico-system/calico-node-4w679"
Jul 12 00:26:14.246823 kubelet[2983]: I0712 00:26:14.246408 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc2vn\" (UniqueName: \"kubernetes.io/projected/689e5db7-a02e-44bd-8511-f17f040fb75a-kube-api-access-hc2vn\") pod \"calico-node-4w679\" (UID: \"689e5db7-a02e-44bd-8511-f17f040fb75a\") " pod="calico-system/calico-node-4w679"
Jul 12 00:26:14.247319 kubelet[2983]: I0712 00:26:14.246449 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/689e5db7-a02e-44bd-8511-f17f040fb75a-cni-bin-dir\") pod \"calico-node-4w679\" (UID: \"689e5db7-a02e-44bd-8511-f17f040fb75a\") " pod="calico-system/calico-node-4w679"
Jul 12 00:26:14.247319 kubelet[2983]: I0712 00:26:14.246496 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/689e5db7-a02e-44bd-8511-f17f040fb75a-cni-log-dir\") pod \"calico-node-4w679\" (UID: \"689e5db7-a02e-44bd-8511-f17f040fb75a\") " pod="calico-system/calico-node-4w679"
Jul 12 00:26:14.247319 kubelet[2983]: I0712 00:26:14.246585 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/689e5db7-a02e-44bd-8511-f17f040fb75a-cni-net-dir\") pod \"calico-node-4w679\" (UID: \"689e5db7-a02e-44bd-8511-f17f040fb75a\") " pod="calico-system/calico-node-4w679"
Jul 12 00:26:14.247319 kubelet[2983]: I0712 00:26:14.246630 2983
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/689e5db7-a02e-44bd-8511-f17f040fb75a-tigera-ca-bundle\") pod \"calico-node-4w679\" (UID: \"689e5db7-a02e-44bd-8511-f17f040fb75a\") " pod="calico-system/calico-node-4w679"
Jul 12 00:26:14.247319 kubelet[2983]: I0712 00:26:14.246670 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/689e5db7-a02e-44bd-8511-f17f040fb75a-node-certs\") pod \"calico-node-4w679\" (UID: \"689e5db7-a02e-44bd-8511-f17f040fb75a\") " pod="calico-system/calico-node-4w679"
Jul 12 00:26:14.247651 kubelet[2983]: I0712 00:26:14.246711 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/689e5db7-a02e-44bd-8511-f17f040fb75a-xtables-lock\") pod \"calico-node-4w679\" (UID: \"689e5db7-a02e-44bd-8511-f17f040fb75a\") " pod="calico-system/calico-node-4w679"
Jul 12 00:26:14.247651 kubelet[2983]: I0712 00:26:14.246753 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/689e5db7-a02e-44bd-8511-f17f040fb75a-policysync\") pod \"calico-node-4w679\" (UID: \"689e5db7-a02e-44bd-8511-f17f040fb75a\") " pod="calico-system/calico-node-4w679"
Jul 12 00:26:14.343398 kubelet[2983]: E0712 00:26:14.342999 2983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g7wxf" podUID="355545c7-e2b3-4e21-bab3-2e3ea1245fce"
Jul 12 00:26:14.360734 kubelet[2983]: E0712 00:26:14.360679 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:26:14.360908 kubelet[2983]: W0712 00:26:14.360741 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:26:14.360908 kubelet[2983]: E0712 00:26:14.360837 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:26:14.397494 kubelet[2983]: E0712 00:26:14.397433 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:26:14.397494 kubelet[2983]: W0712 00:26:14.397482 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:26:14.397745 kubelet[2983]: E0712 00:26:14.397518 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jul 12 00:26:14.434366 env[1913]: time="2025-07-12T00:26:14.434291324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4w679,Uid:689e5db7-a02e-44bd-8511-f17f040fb75a,Namespace:calico-system,Attempt:0,}"
Jul 12 00:26:14.435703 kubelet[2983]: E0712 00:26:14.435646 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:26:14.435703 kubelet[2983]: W0712 00:26:14.435693 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:26:14.435942 kubelet[2983]: E0712 00:26:14.435731 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:26:14.443546 kubelet[2983]: E0712 00:26:14.443457 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:26:14.443546 kubelet[2983]: W0712 00:26:14.443502 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:26:14.444143 kubelet[2983]: E0712 00:26:14.443561 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:26:14.454136 kubelet[2983]: E0712 00:26:14.454080 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:26:14.454136 kubelet[2983]: W0712 00:26:14.454123 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:26:14.454492 kubelet[2983]: E0712 00:26:14.454160 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:26:14.454659 kubelet[2983]: E0712 00:26:14.454632 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:26:14.454780 kubelet[2983]: W0712 00:26:14.454754 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:26:14.454899 kubelet[2983]: E0712 00:26:14.454874 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jul 12 00:26:14.457539 kubelet[2983]: E0712 00:26:14.457379 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:26:14.457539 kubelet[2983]: W0712 00:26:14.457424 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:26:14.457539 kubelet[2983]: E0712 00:26:14.457460 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:26:14.459408 kubelet[2983]: E0712 00:26:14.459362 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:26:14.459408 kubelet[2983]: W0712 00:26:14.459397 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:26:14.459614 kubelet[2983]: E0712 00:26:14.459427 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:26:14.459768 kubelet[2983]: E0712 00:26:14.459735 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:26:14.459768 kubelet[2983]: W0712 00:26:14.459762 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:26:14.459910 kubelet[2983]: E0712 00:26:14.459785 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:26:14.460092 kubelet[2983]: E0712 00:26:14.460057 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:26:14.460092 kubelet[2983]: W0712 00:26:14.460085 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:26:14.460258 kubelet[2983]: E0712 00:26:14.460107 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jul 12 00:26:14.460506 kubelet[2983]: E0712 00:26:14.460466 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:26:14.460506 kubelet[2983]: W0712 00:26:14.460502 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:26:14.460669 kubelet[2983]: E0712 00:26:14.460526 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:26:14.466265 kubelet[2983]: E0712 00:26:14.461724 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:26:14.466265 kubelet[2983]: W0712 00:26:14.461769 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:26:14.466265 kubelet[2983]: E0712 00:26:14.461803 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:26:14.466265 kubelet[2983]: E0712 00:26:14.462211 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:26:14.466265 kubelet[2983]: W0712 00:26:14.462279 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:26:14.466265 kubelet[2983]: E0712 00:26:14.462310 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:26:14.466265 kubelet[2983]: E0712 00:26:14.462658 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:26:14.466265 kubelet[2983]: W0712 00:26:14.462680 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:26:14.466265 kubelet[2983]: E0712 00:26:14.462705 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jul 12 00:26:14.466265 kubelet[2983]: E0712 00:26:14.463095 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:26:14.466935 kubelet[2983]: W0712 00:26:14.463116 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:26:14.466935 kubelet[2983]: E0712 00:26:14.463144 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:26:14.471365 kubelet[2983]: E0712 00:26:14.467598 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:26:14.471365 kubelet[2983]: W0712 00:26:14.467641 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:26:14.471365 kubelet[2983]: E0712 00:26:14.467677 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:26:14.471365 kubelet[2983]: E0712 00:26:14.469423 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:26:14.471365 kubelet[2983]: W0712 00:26:14.469455 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:26:14.471365 kubelet[2983]: E0712 00:26:14.469489 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:26:14.473806 kubelet[2983]: E0712 00:26:14.473739 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:26:14.473806 kubelet[2983]: W0712 00:26:14.473797 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:26:14.474052 kubelet[2983]: E0712 00:26:14.473833 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jul 12 00:26:14.474387 kubelet[2983]: E0712 00:26:14.474266 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:26:14.474387 kubelet[2983]: W0712 00:26:14.474321 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:26:14.474387 kubelet[2983]: E0712 00:26:14.474348 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:26:14.474660 kubelet[2983]: E0712 00:26:14.474641 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:26:14.474660 kubelet[2983]: W0712 00:26:14.474658 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:26:14.474774 kubelet[2983]: E0712 00:26:14.474706 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:26:14.481744 kubelet[2983]: E0712 00:26:14.475403 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:26:14.481744 kubelet[2983]: W0712 00:26:14.475441 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:26:14.481744 kubelet[2983]: E0712 00:26:14.475474 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 12 00:26:14.481744 kubelet[2983]: E0712 00:26:14.475834 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 12 00:26:14.481744 kubelet[2983]: W0712 00:26:14.475854 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 12 00:26:14.481744 kubelet[2983]: E0712 00:26:14.475876 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 12 00:26:14.481744 kubelet[2983]: E0712 00:26:14.477143 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.481744 kubelet[2983]: W0712 00:26:14.477172 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.481744 kubelet[2983]: E0712 00:26:14.477205 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:14.482406 kubelet[2983]: I0712 00:26:14.477300 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/355545c7-e2b3-4e21-bab3-2e3ea1245fce-kubelet-dir\") pod \"csi-node-driver-g7wxf\" (UID: \"355545c7-e2b3-4e21-bab3-2e3ea1245fce\") " pod="calico-system/csi-node-driver-g7wxf" Jul 12 00:26:14.482406 kubelet[2983]: E0712 00:26:14.477736 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.482406 kubelet[2983]: W0712 00:26:14.477760 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.482406 kubelet[2983]: E0712 00:26:14.477789 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:14.482406 kubelet[2983]: I0712 00:26:14.477824 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/355545c7-e2b3-4e21-bab3-2e3ea1245fce-registration-dir\") pod \"csi-node-driver-g7wxf\" (UID: \"355545c7-e2b3-4e21-bab3-2e3ea1245fce\") " pod="calico-system/csi-node-driver-g7wxf" Jul 12 00:26:14.482406 kubelet[2983]: E0712 00:26:14.478189 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.482406 kubelet[2983]: W0712 00:26:14.478214 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.482406 kubelet[2983]: E0712 00:26:14.478306 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:14.482896 kubelet[2983]: I0712 00:26:14.478351 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/355545c7-e2b3-4e21-bab3-2e3ea1245fce-varrun\") pod \"csi-node-driver-g7wxf\" (UID: \"355545c7-e2b3-4e21-bab3-2e3ea1245fce\") " pod="calico-system/csi-node-driver-g7wxf" Jul 12 00:26:14.482896 kubelet[2983]: E0712 00:26:14.478731 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.482896 kubelet[2983]: W0712 00:26:14.478753 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.482896 kubelet[2983]: E0712 00:26:14.478777 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:14.482896 kubelet[2983]: I0712 00:26:14.478810 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trdv2\" (UniqueName: \"kubernetes.io/projected/355545c7-e2b3-4e21-bab3-2e3ea1245fce-kube-api-access-trdv2\") pod \"csi-node-driver-g7wxf\" (UID: \"355545c7-e2b3-4e21-bab3-2e3ea1245fce\") " pod="calico-system/csi-node-driver-g7wxf" Jul 12 00:26:14.482896 kubelet[2983]: E0712 00:26:14.479113 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.482896 kubelet[2983]: W0712 00:26:14.479133 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.482896 kubelet[2983]: E0712 00:26:14.479154 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:14.483421 kubelet[2983]: I0712 00:26:14.479185 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/355545c7-e2b3-4e21-bab3-2e3ea1245fce-socket-dir\") pod \"csi-node-driver-g7wxf\" (UID: \"355545c7-e2b3-4e21-bab3-2e3ea1245fce\") " pod="calico-system/csi-node-driver-g7wxf" Jul 12 00:26:14.483421 kubelet[2983]: E0712 00:26:14.479610 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.483421 kubelet[2983]: W0712 00:26:14.479631 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.483421 kubelet[2983]: E0712 00:26:14.479653 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:14.483421 kubelet[2983]: E0712 00:26:14.479936 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.483421 kubelet[2983]: W0712 00:26:14.479955 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.483421 kubelet[2983]: E0712 00:26:14.479975 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:14.483421 kubelet[2983]: E0712 00:26:14.480368 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.483421 kubelet[2983]: W0712 00:26:14.480386 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.483929 kubelet[2983]: E0712 00:26:14.480408 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:14.483929 kubelet[2983]: E0712 00:26:14.480678 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.483929 kubelet[2983]: W0712 00:26:14.480694 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.483929 kubelet[2983]: E0712 00:26:14.480713 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:14.483929 kubelet[2983]: E0712 00:26:14.481011 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.483929 kubelet[2983]: W0712 00:26:14.481026 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.483929 kubelet[2983]: E0712 00:26:14.481045 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:14.483929 kubelet[2983]: E0712 00:26:14.481357 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.483929 kubelet[2983]: W0712 00:26:14.481378 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.483929 kubelet[2983]: E0712 00:26:14.481401 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:14.484507 kubelet[2983]: E0712 00:26:14.481955 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.484507 kubelet[2983]: W0712 00:26:14.481978 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.484507 kubelet[2983]: E0712 00:26:14.482004 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:14.484507 kubelet[2983]: E0712 00:26:14.482458 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.484507 kubelet[2983]: W0712 00:26:14.482478 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.484507 kubelet[2983]: E0712 00:26:14.482501 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:14.484507 kubelet[2983]: E0712 00:26:14.482835 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.484507 kubelet[2983]: W0712 00:26:14.482850 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.484507 kubelet[2983]: E0712 00:26:14.482870 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:14.484507 kubelet[2983]: E0712 00:26:14.483118 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.485044 kubelet[2983]: W0712 00:26:14.483132 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.485044 kubelet[2983]: E0712 00:26:14.483151 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:14.501050 env[1913]: time="2025-07-12T00:26:14.500901535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:26:14.501258 env[1913]: time="2025-07-12T00:26:14.501068144Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:26:14.501258 env[1913]: time="2025-07-12T00:26:14.501133496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:26:14.501560 env[1913]: time="2025-07-12T00:26:14.501480045Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/58241c71b6a13d6b67bd08db584f33abb101916a8e6fc064b73a4416ee64ac97 pid=3461 runtime=io.containerd.runc.v2 Jul 12 00:26:14.580252 kubelet[2983]: E0712 00:26:14.579947 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.580252 kubelet[2983]: W0712 00:26:14.579982 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.580252 kubelet[2983]: E0712 00:26:14.580015 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:14.581156 kubelet[2983]: E0712 00:26:14.580990 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.581156 kubelet[2983]: W0712 00:26:14.581022 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.581156 kubelet[2983]: E0712 00:26:14.581073 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:14.586398 kubelet[2983]: E0712 00:26:14.581538 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.586398 kubelet[2983]: W0712 00:26:14.581602 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.586398 kubelet[2983]: E0712 00:26:14.581662 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:14.589999 kubelet[2983]: E0712 00:26:14.587074 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.589999 kubelet[2983]: W0712 00:26:14.587113 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.589999 kubelet[2983]: E0712 00:26:14.587549 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.589999 kubelet[2983]: W0712 00:26:14.587568 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.589999 kubelet[2983]: E0712 00:26:14.587852 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.589999 kubelet[2983]: W0712 00:26:14.587867 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.589999 kubelet[2983]: E0712 00:26:14.587891 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:14.589999 kubelet[2983]: E0712 00:26:14.588186 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.589999 kubelet[2983]: W0712 00:26:14.588202 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.589999 kubelet[2983]: E0712 00:26:14.588250 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:14.589999 kubelet[2983]: E0712 00:26:14.588507 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.590730 kubelet[2983]: W0712 00:26:14.588521 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.590730 kubelet[2983]: E0712 00:26:14.588540 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:14.590730 kubelet[2983]: E0712 00:26:14.589014 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.590730 kubelet[2983]: W0712 00:26:14.589031 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.590730 kubelet[2983]: E0712 00:26:14.589051 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:14.590730 kubelet[2983]: E0712 00:26:14.589091 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:14.590730 kubelet[2983]: E0712 00:26:14.589365 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.590730 kubelet[2983]: W0712 00:26:14.589380 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.590730 kubelet[2983]: E0712 00:26:14.589398 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:14.590730 kubelet[2983]: E0712 00:26:14.589686 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.591363 kubelet[2983]: W0712 00:26:14.589702 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.591363 kubelet[2983]: E0712 00:26:14.589720 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:14.591363 kubelet[2983]: E0712 00:26:14.590043 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:14.591363 kubelet[2983]: E0712 00:26:14.590439 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.591363 kubelet[2983]: W0712 00:26:14.590456 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.591363 kubelet[2983]: E0712 00:26:14.590485 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:14.591363 kubelet[2983]: E0712 00:26:14.591342 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.591363 kubelet[2983]: W0712 00:26:14.591369 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.591876 kubelet[2983]: E0712 00:26:14.591396 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:14.595866 kubelet[2983]: E0712 00:26:14.594577 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.595866 kubelet[2983]: W0712 00:26:14.594622 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.595866 kubelet[2983]: E0712 00:26:14.594677 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:14.595866 kubelet[2983]: E0712 00:26:14.595143 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.595866 kubelet[2983]: W0712 00:26:14.595171 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.595866 kubelet[2983]: E0712 00:26:14.595199 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:14.599391 kubelet[2983]: E0712 00:26:14.597408 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.599391 kubelet[2983]: W0712 00:26:14.597468 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.599391 kubelet[2983]: E0712 00:26:14.597860 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.599391 kubelet[2983]: W0712 00:26:14.597879 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.599391 kubelet[2983]: E0712 00:26:14.598128 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.599391 kubelet[2983]: W0712 00:26:14.598141 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.599391 kubelet[2983]: E0712 00:26:14.598164 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:14.599391 kubelet[2983]: E0712 00:26:14.599289 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:14.599391 kubelet[2983]: E0712 00:26:14.599351 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.599391 kubelet[2983]: W0712 00:26:14.599372 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.600085 kubelet[2983]: E0712 00:26:14.599396 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:14.600280 kubelet[2983]: E0712 00:26:14.599358 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:14.602287 kubelet[2983]: E0712 00:26:14.601378 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.602287 kubelet[2983]: W0712 00:26:14.601416 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.602287 kubelet[2983]: E0712 00:26:14.601466 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:14.603585 kubelet[2983]: E0712 00:26:14.603382 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.603585 kubelet[2983]: W0712 00:26:14.603421 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.603585 kubelet[2983]: E0712 00:26:14.603489 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:14.608263 kubelet[2983]: E0712 00:26:14.603850 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.608263 kubelet[2983]: W0712 00:26:14.603889 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.608263 kubelet[2983]: E0712 00:26:14.605393 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.608263 kubelet[2983]: W0712 00:26:14.605420 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.608263 kubelet[2983]: E0712 00:26:14.605455 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:14.608263 kubelet[2983]: E0712 00:26:14.605501 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:14.608263 kubelet[2983]: E0712 00:26:14.607354 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.608263 kubelet[2983]: W0712 00:26:14.607385 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.608263 kubelet[2983]: E0712 00:26:14.607422 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:14.608263 kubelet[2983]: E0712 00:26:14.607851 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.608974 kubelet[2983]: W0712 00:26:14.607873 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.608974 kubelet[2983]: E0712 00:26:14.607897 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:14.632277 kubelet[2983]: E0712 00:26:14.623151 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:14.632277 kubelet[2983]: W0712 00:26:14.623188 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:14.632277 kubelet[2983]: E0712 00:26:14.623238 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:14.681104 env[1913]: time="2025-07-12T00:26:14.681033590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4w679,Uid:689e5db7-a02e-44bd-8511-f17f040fb75a,Namespace:calico-system,Attempt:0,} returns sandbox id \"58241c71b6a13d6b67bd08db584f33abb101916a8e6fc064b73a4416ee64ac97\"" Jul 12 00:26:14.682000 audit[3524]: NETFILTER_CFG table=filter:97 family=2 entries=20 op=nft_register_rule pid=3524 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:26:14.682000 audit[3524]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd8a5b490 a2=0 a3=1 items=0 ppid=3133 pid=3524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:14.682000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:26:14.690000 audit[3524]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=3524 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:26:14.690000 audit[3524]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd8a5b490 a2=0 
a3=1 items=0 ppid=3133 pid=3524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:14.690000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:26:14.770099 systemd[1]: run-containerd-runc-k8s.io-31f03670a9aec1cb0a69bd7310b5211185beedd4b25f5076622ed8307c003285-runc.mNPLUK.mount: Deactivated successfully. Jul 12 00:26:15.568049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1098970071.mount: Deactivated successfully. Jul 12 00:26:15.881000 audit[3526]: NETFILTER_CFG table=filter:99 family=2 entries=20 op=nft_register_rule pid=3526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:26:15.881000 audit[3526]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffef2f9d10 a2=0 a3=1 items=0 ppid=3133 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:15.881000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:26:15.888000 audit[3526]: NETFILTER_CFG table=nat:100 family=2 entries=12 op=nft_register_rule pid=3526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:26:15.888000 audit[3526]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffef2f9d10 a2=0 a3=1 items=0 ppid=3133 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:15.888000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:26:16.384724 kubelet[2983]: E0712 00:26:16.384645 2983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g7wxf" podUID="355545c7-e2b3-4e21-bab3-2e3ea1245fce" Jul 12 00:26:16.964441 env[1913]: time="2025-07-12T00:26:16.963082356Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:16.969607 env[1913]: time="2025-07-12T00:26:16.969548153Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:16.980634 env[1913]: time="2025-07-12T00:26:16.980577610Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:16.994000 audit[3528]: NETFILTER_CFG table=filter:101 family=2 entries=21 op=nft_register_rule pid=3528 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:26:16.994000 audit[3528]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd41239b0 a2=0 a3=1 items=0 ppid=3133 pid=3528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:16.994000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:26:16.996836 env[1913]: 
time="2025-07-12T00:26:16.996779932Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:17.004057 env[1913]: time="2025-07-12T00:26:17.002829448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 12 00:26:17.024000 audit[3528]: NETFILTER_CFG table=nat:102 family=2 entries=12 op=nft_register_rule pid=3528 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:26:17.024000 audit[3528]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd41239b0 a2=0 a3=1 items=0 ppid=3133 pid=3528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:17.024000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:26:17.027968 env[1913]: time="2025-07-12T00:26:17.021745325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 12 00:26:17.067619 env[1913]: time="2025-07-12T00:26:17.067369958Z" level=info msg="CreateContainer within sandbox \"31f03670a9aec1cb0a69bd7310b5211185beedd4b25f5076622ed8307c003285\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 12 00:26:17.114626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3399903387.mount: Deactivated successfully. 
Jul 12 00:26:17.122344 env[1913]: time="2025-07-12T00:26:17.122194955Z" level=info msg="CreateContainer within sandbox \"31f03670a9aec1cb0a69bd7310b5211185beedd4b25f5076622ed8307c003285\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5fd223a4d4447c52be70667a9bec434146e51f522e238b8e0cd636d367380ac4\"" Jul 12 00:26:17.127901 env[1913]: time="2025-07-12T00:26:17.127824661Z" level=info msg="StartContainer for \"5fd223a4d4447c52be70667a9bec434146e51f522e238b8e0cd636d367380ac4\"" Jul 12 00:26:17.438634 env[1913]: time="2025-07-12T00:26:17.436863519Z" level=info msg="StartContainer for \"5fd223a4d4447c52be70667a9bec434146e51f522e238b8e0cd636d367380ac4\" returns successfully" Jul 12 00:26:17.615623 kubelet[2983]: E0712 00:26:17.615583 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:17.616316 kubelet[2983]: W0712 00:26:17.616276 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:17.616483 kubelet[2983]: E0712 00:26:17.616453 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:17.617546 kubelet[2983]: E0712 00:26:17.617511 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:17.617750 kubelet[2983]: W0712 00:26:17.617720 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:17.617926 kubelet[2983]: E0712 00:26:17.617899 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:17.618503 kubelet[2983]: E0712 00:26:17.618476 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:17.618716 kubelet[2983]: W0712 00:26:17.618686 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:17.618871 kubelet[2983]: E0712 00:26:17.618845 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:17.621528 kubelet[2983]: E0712 00:26:17.621489 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:17.621759 kubelet[2983]: W0712 00:26:17.621727 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:17.621892 kubelet[2983]: E0712 00:26:17.621866 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:17.622780 kubelet[2983]: E0712 00:26:17.622745 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:17.623011 kubelet[2983]: W0712 00:26:17.622978 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:17.623342 kubelet[2983]: E0712 00:26:17.623310 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:17.623957 kubelet[2983]: E0712 00:26:17.623921 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:17.624159 kubelet[2983]: W0712 00:26:17.624129 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:17.624398 kubelet[2983]: E0712 00:26:17.624369 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:17.625675 kubelet[2983]: E0712 00:26:17.625638 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:17.625876 kubelet[2983]: W0712 00:26:17.625845 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:17.626044 kubelet[2983]: E0712 00:26:17.626017 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:17.626623 kubelet[2983]: E0712 00:26:17.626595 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:17.626788 kubelet[2983]: W0712 00:26:17.626762 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:17.626939 kubelet[2983]: E0712 00:26:17.626911 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:17.627551 kubelet[2983]: E0712 00:26:17.627521 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:17.627722 kubelet[2983]: W0712 00:26:17.627694 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:17.627881 kubelet[2983]: E0712 00:26:17.627854 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:17.628439 kubelet[2983]: E0712 00:26:17.628411 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:17.628654 kubelet[2983]: W0712 00:26:17.628625 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:17.628785 kubelet[2983]: E0712 00:26:17.628759 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:17.630543 kubelet[2983]: E0712 00:26:17.630506 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:17.630745 kubelet[2983]: W0712 00:26:17.630715 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:17.630914 kubelet[2983]: E0712 00:26:17.630883 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:17.631647 kubelet[2983]: E0712 00:26:17.631540 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:17.631890 kubelet[2983]: W0712 00:26:17.631856 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:17.632031 kubelet[2983]: E0712 00:26:17.632001 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:17.633491 kubelet[2983]: E0712 00:26:17.633453 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:17.633689 kubelet[2983]: W0712 00:26:17.633658 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:17.634017 kubelet[2983]: E0712 00:26:17.633940 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:17.634729 kubelet[2983]: E0712 00:26:17.634695 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:17.634910 kubelet[2983]: W0712 00:26:17.634879 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:17.635438 kubelet[2983]: E0712 00:26:17.635403 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:17.636050 kubelet[2983]: E0712 00:26:17.636019 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:17.636309 kubelet[2983]: W0712 00:26:17.636208 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:17.636484 kubelet[2983]: E0712 00:26:17.636457 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:17.716070 kubelet[2983]: E0712 00:26:17.715947 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:17.716291 kubelet[2983]: W0712 00:26:17.716259 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:17.716448 kubelet[2983]: E0712 00:26:17.716422 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:17.717094 kubelet[2983]: E0712 00:26:17.717046 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:17.717094 kubelet[2983]: W0712 00:26:17.717084 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:17.717374 kubelet[2983]: E0712 00:26:17.717130 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:17.719601 kubelet[2983]: E0712 00:26:17.719548 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:17.719601 kubelet[2983]: W0712 00:26:17.719588 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:17.719879 kubelet[2983]: E0712 00:26:17.719849 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:17.720505 kubelet[2983]: E0712 00:26:17.720459 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:17.720505 kubelet[2983]: W0712 00:26:17.720494 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:17.720763 kubelet[2983]: E0712 00:26:17.720540 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:17.722480 kubelet[2983]: E0712 00:26:17.722427 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:17.722480 kubelet[2983]: W0712 00:26:17.722468 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:17.722814 kubelet[2983]: E0712 00:26:17.722517 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:17.724477 kubelet[2983]: E0712 00:26:17.724426 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:17.724477 kubelet[2983]: W0712 00:26:17.724465 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:17.724808 kubelet[2983]: E0712 00:26:17.724779 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:17.725321 kubelet[2983]: E0712 00:26:17.725275 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:17.725321 kubelet[2983]: W0712 00:26:17.725311 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:17.725594 kubelet[2983]: E0712 00:26:17.725565 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:17.726467 kubelet[2983]: E0712 00:26:17.726415 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:17.726467 kubelet[2983]: W0712 00:26:17.726456 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:17.726803 kubelet[2983]: E0712 00:26:17.726772 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:17.729551 kubelet[2983]: E0712 00:26:17.729511 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:17.729781 kubelet[2983]: W0712 00:26:17.729750 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:17.730006 kubelet[2983]: E0712 00:26:17.729976 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:17.731012 kubelet[2983]: E0712 00:26:17.730976 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:17.731203 kubelet[2983]: W0712 00:26:17.731174 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:17.731447 kubelet[2983]: E0712 00:26:17.731422 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:17.732397 kubelet[2983]: E0712 00:26:17.732366 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:17.732604 kubelet[2983]: W0712 00:26:17.732575 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:17.732827 kubelet[2983]: E0712 00:26:17.732801 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:17.736609 kubelet[2983]: E0712 00:26:17.736568 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:17.736818 kubelet[2983]: W0712 00:26:17.736788 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:17.737130 kubelet[2983]: E0712 00:26:17.737100 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:17.739328 kubelet[2983]: E0712 00:26:17.739288 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:17.739631 kubelet[2983]: W0712 00:26:17.739597 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:17.739804 kubelet[2983]: E0712 00:26:17.739777 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:17.742472 kubelet[2983]: E0712 00:26:17.742433 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:17.742676 kubelet[2983]: W0712 00:26:17.742646 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:17.743024 kubelet[2983]: E0712 00:26:17.742996 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:17.743286 kubelet[2983]: E0712 00:26:17.743263 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:17.743415 kubelet[2983]: W0712 00:26:17.743388 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:17.743585 kubelet[2983]: E0712 00:26:17.743557 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:17.744371 kubelet[2983]: E0712 00:26:17.744314 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:17.751178 kubelet[2983]: W0712 00:26:17.750921 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:17.751804 kubelet[2983]: E0712 00:26:17.751651 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:17.752979 kubelet[2983]: E0712 00:26:17.752945 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:17.753203 kubelet[2983]: W0712 00:26:17.753154 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:17.755383 kubelet[2983]: E0712 00:26:17.753472 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:26:17.759739 kubelet[2983]: E0712 00:26:17.755668 2983 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:26:17.759739 kubelet[2983]: W0712 00:26:17.755710 2983 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:26:17.759739 kubelet[2983]: E0712 00:26:17.755742 2983 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:26:18.301332 env[1913]: time="2025-07-12T00:26:18.301268773Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:18.303729 env[1913]: time="2025-07-12T00:26:18.303657979Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:18.306190 env[1913]: time="2025-07-12T00:26:18.306126181Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:18.308870 env[1913]: time="2025-07-12T00:26:18.308750335Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:18.310087 env[1913]: time="2025-07-12T00:26:18.310022363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 12 00:26:18.318841 env[1913]: time="2025-07-12T00:26:18.318763677Z" level=info msg="CreateContainer within sandbox \"58241c71b6a13d6b67bd08db584f33abb101916a8e6fc064b73a4416ee64ac97\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 12 00:26:18.343174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount227352012.mount: Deactivated successfully. 
Jul 12 00:26:18.348433 env[1913]: time="2025-07-12T00:26:18.348339288Z" level=info msg="CreateContainer within sandbox \"58241c71b6a13d6b67bd08db584f33abb101916a8e6fc064b73a4416ee64ac97\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"309d51b1f5cbc6956d2d91bdaa285023c65df9a7768ec5998403970879242881\"" Jul 12 00:26:18.350766 env[1913]: time="2025-07-12T00:26:18.349383278Z" level=info msg="StartContainer for \"309d51b1f5cbc6956d2d91bdaa285023c65df9a7768ec5998403970879242881\"" Jul 12 00:26:18.384907 kubelet[2983]: E0712 00:26:18.384822 2983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g7wxf" podUID="355545c7-e2b3-4e21-bab3-2e3ea1245fce" Jul 12 00:26:18.478105 env[1913]: time="2025-07-12T00:26:18.478043488Z" level=info msg="StartContainer for \"309d51b1f5cbc6956d2d91bdaa285023c65df9a7768ec5998403970879242881\" returns successfully" Jul 12 00:26:18.632971 kubelet[2983]: I0712 00:26:18.632853 2983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-76b978477b-njz55" podStartSLOduration=2.834188166 podStartE2EDuration="5.632830476s" podCreationTimestamp="2025-07-12 00:26:13 +0000 UTC" firstStartedPulling="2025-07-12 00:26:14.206898041 +0000 UTC m=+31.180099037" lastFinishedPulling="2025-07-12 00:26:17.005540351 +0000 UTC m=+33.978741347" observedRunningTime="2025-07-12 00:26:17.61391843 +0000 UTC m=+34.587119426" watchObservedRunningTime="2025-07-12 00:26:18.632830476 +0000 UTC m=+35.606031496" Jul 12 00:26:18.705000 audit[3665]: NETFILTER_CFG table=filter:103 family=2 entries=21 op=nft_register_rule pid=3665 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:26:18.710719 kernel: kauditd_printk_skb: 20 callbacks suppressed Jul 12 00:26:18.710870 kernel: audit: 
type=1325 audit(1752279978.705:300): table=filter:103 family=2 entries=21 op=nft_register_rule pid=3665 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:26:18.705000 audit[3665]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd05acd90 a2=0 a3=1 items=0 ppid=3133 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:18.727781 kernel: audit: type=1300 audit(1752279978.705:300): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd05acd90 a2=0 a3=1 items=0 ppid=3133 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:18.705000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:26:18.733676 kernel: audit: type=1327 audit(1752279978.705:300): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:26:18.726000 audit[3665]: NETFILTER_CFG table=nat:104 family=2 entries=19 op=nft_register_chain pid=3665 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:26:18.740096 kernel: audit: type=1325 audit(1752279978.726:301): table=nat:104 family=2 entries=19 op=nft_register_chain pid=3665 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:26:18.726000 audit[3665]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffd05acd90 a2=0 a3=1 items=0 ppid=3133 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:18.753786 kernel: audit: 
type=1300 audit(1752279978.726:301): arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffd05acd90 a2=0 a3=1 items=0 ppid=3133 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:18.726000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:26:18.759674 kernel: audit: type=1327 audit(1752279978.726:301): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:26:18.934850 env[1913]: time="2025-07-12T00:26:18.934668415Z" level=info msg="shim disconnected" id=309d51b1f5cbc6956d2d91bdaa285023c65df9a7768ec5998403970879242881 Jul 12 00:26:18.934850 env[1913]: time="2025-07-12T00:26:18.934741448Z" level=warning msg="cleaning up after shim disconnected" id=309d51b1f5cbc6956d2d91bdaa285023c65df9a7768ec5998403970879242881 namespace=k8s.io Jul 12 00:26:18.934850 env[1913]: time="2025-07-12T00:26:18.934766492Z" level=info msg="cleaning up dead shim" Jul 12 00:26:18.952293 env[1913]: time="2025-07-12T00:26:18.952207492Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:26:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3666 runtime=io.containerd.runc.v2\n" Jul 12 00:26:19.027378 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-309d51b1f5cbc6956d2d91bdaa285023c65df9a7768ec5998403970879242881-rootfs.mount: Deactivated successfully. 
Jul 12 00:26:19.600463 env[1913]: time="2025-07-12T00:26:19.598751070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 12 00:26:20.385294 kubelet[2983]: E0712 00:26:20.384603 2983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g7wxf" podUID="355545c7-e2b3-4e21-bab3-2e3ea1245fce" Jul 12 00:26:22.384268 kubelet[2983]: E0712 00:26:22.384108 2983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g7wxf" podUID="355545c7-e2b3-4e21-bab3-2e3ea1245fce" Jul 12 00:26:23.372019 env[1913]: time="2025-07-12T00:26:23.371939718Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:23.374608 env[1913]: time="2025-07-12T00:26:23.374527524Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:23.377123 env[1913]: time="2025-07-12T00:26:23.377061774Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:23.379802 env[1913]: time="2025-07-12T00:26:23.379741104Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:23.381712 
env[1913]: time="2025-07-12T00:26:23.381662765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 12 00:26:23.401780 env[1913]: time="2025-07-12T00:26:23.401677612Z" level=info msg="CreateContainer within sandbox \"58241c71b6a13d6b67bd08db584f33abb101916a8e6fc064b73a4416ee64ac97\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 12 00:26:23.427310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3339138129.mount: Deactivated successfully. Jul 12 00:26:23.437446 env[1913]: time="2025-07-12T00:26:23.437383253Z" level=info msg="CreateContainer within sandbox \"58241c71b6a13d6b67bd08db584f33abb101916a8e6fc064b73a4416ee64ac97\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9a0f7ea9c87afb2781caf3abad89ef072d2ae45cc84de12db98232a9a8b782b8\"" Jul 12 00:26:23.440253 env[1913]: time="2025-07-12T00:26:23.438898196Z" level=info msg="StartContainer for \"9a0f7ea9c87afb2781caf3abad89ef072d2ae45cc84de12db98232a9a8b782b8\"" Jul 12 00:26:23.580022 env[1913]: time="2025-07-12T00:26:23.579928867Z" level=info msg="StartContainer for \"9a0f7ea9c87afb2781caf3abad89ef072d2ae45cc84de12db98232a9a8b782b8\" returns successfully" Jul 12 00:26:24.384677 kubelet[2983]: E0712 00:26:24.384562 2983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g7wxf" podUID="355545c7-e2b3-4e21-bab3-2e3ea1245fce" Jul 12 00:26:24.662955 env[1913]: time="2025-07-12T00:26:24.662749605Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load 
cni config" Jul 12 00:26:24.682592 kubelet[2983]: I0712 00:26:24.682535 2983 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 12 00:26:24.792120 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a0f7ea9c87afb2781caf3abad89ef072d2ae45cc84de12db98232a9a8b782b8-rootfs.mount: Deactivated successfully. Jul 12 00:26:24.886659 kubelet[2983]: I0712 00:26:24.886299 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8h67\" (UniqueName: \"kubernetes.io/projected/08c64d92-9452-47f2-8a8c-8837e4813c7d-kube-api-access-t8h67\") pod \"coredns-7c65d6cfc9-6g88r\" (UID: \"08c64d92-9452-47f2-8a8c-8837e4813c7d\") " pod="kube-system/coredns-7c65d6cfc9-6g88r" Jul 12 00:26:24.886659 kubelet[2983]: I0712 00:26:24.886420 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw2w7\" (UniqueName: \"kubernetes.io/projected/213fb0de-6a80-4aa5-aeb1-a0af932ccfc6-kube-api-access-nw2w7\") pod \"goldmane-58fd7646b9-p759q\" (UID: \"213fb0de-6a80-4aa5-aeb1-a0af932ccfc6\") " pod="calico-system/goldmane-58fd7646b9-p759q" Jul 12 00:26:24.886659 kubelet[2983]: I0712 00:26:24.886471 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgtdz\" (UniqueName: \"kubernetes.io/projected/c1b2666c-1d6f-4ba9-9d83-e51550e0fc3d-kube-api-access-rgtdz\") pod \"calico-apiserver-8494455ff7-bxk4f\" (UID: \"c1b2666c-1d6f-4ba9-9d83-e51550e0fc3d\") " pod="calico-apiserver/calico-apiserver-8494455ff7-bxk4f" Jul 12 00:26:24.886659 kubelet[2983]: I0712 00:26:24.886537 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wdg4\" (UniqueName: \"kubernetes.io/projected/7051a960-7ce8-45f1-8249-f71049b41599-kube-api-access-8wdg4\") pod \"calico-apiserver-8494455ff7-gwch8\" (UID: \"7051a960-7ce8-45f1-8249-f71049b41599\") " 
pod="calico-apiserver/calico-apiserver-8494455ff7-gwch8" Jul 12 00:26:24.886659 kubelet[2983]: I0712 00:26:24.886607 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c1b2666c-1d6f-4ba9-9d83-e51550e0fc3d-calico-apiserver-certs\") pod \"calico-apiserver-8494455ff7-bxk4f\" (UID: \"c1b2666c-1d6f-4ba9-9d83-e51550e0fc3d\") " pod="calico-apiserver/calico-apiserver-8494455ff7-bxk4f" Jul 12 00:26:24.887266 kubelet[2983]: I0712 00:26:24.886652 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08c64d92-9452-47f2-8a8c-8837e4813c7d-config-volume\") pod \"coredns-7c65d6cfc9-6g88r\" (UID: \"08c64d92-9452-47f2-8a8c-8837e4813c7d\") " pod="kube-system/coredns-7c65d6cfc9-6g88r" Jul 12 00:26:24.887266 kubelet[2983]: I0712 00:26:24.886724 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvbtl\" (UniqueName: \"kubernetes.io/projected/0398604d-88a0-41c4-996f-ea9a3a6c7de4-kube-api-access-kvbtl\") pod \"calico-kube-controllers-b9c4d9bf9-swqxk\" (UID: \"0398604d-88a0-41c4-996f-ea9a3a6c7de4\") " pod="calico-system/calico-kube-controllers-b9c4d9bf9-swqxk" Jul 12 00:26:24.887266 kubelet[2983]: I0712 00:26:24.886798 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b25db915-a031-4109-9564-cc0834ce0083-config-volume\") pod \"coredns-7c65d6cfc9-msgjt\" (UID: \"b25db915-a031-4109-9564-cc0834ce0083\") " pod="kube-system/coredns-7c65d6cfc9-msgjt" Jul 12 00:26:24.887266 kubelet[2983]: I0712 00:26:24.886862 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/7051a960-7ce8-45f1-8249-f71049b41599-calico-apiserver-certs\") pod \"calico-apiserver-8494455ff7-gwch8\" (UID: \"7051a960-7ce8-45f1-8249-f71049b41599\") " pod="calico-apiserver/calico-apiserver-8494455ff7-gwch8" Jul 12 00:26:24.887266 kubelet[2983]: I0712 00:26:24.886904 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f0a0acc-0bab-4743-a700-1a9f1e2d87c0-whisker-ca-bundle\") pod \"whisker-5f777c54c6-9qb2c\" (UID: \"3f0a0acc-0bab-4743-a700-1a9f1e2d87c0\") " pod="calico-system/whisker-5f777c54c6-9qb2c" Jul 12 00:26:24.887644 kubelet[2983]: I0712 00:26:24.886977 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0398604d-88a0-41c4-996f-ea9a3a6c7de4-tigera-ca-bundle\") pod \"calico-kube-controllers-b9c4d9bf9-swqxk\" (UID: \"0398604d-88a0-41c4-996f-ea9a3a6c7de4\") " pod="calico-system/calico-kube-controllers-b9c4d9bf9-swqxk" Jul 12 00:26:24.887644 kubelet[2983]: I0712 00:26:24.887046 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gks8p\" (UniqueName: \"kubernetes.io/projected/3f0a0acc-0bab-4743-a700-1a9f1e2d87c0-kube-api-access-gks8p\") pod \"whisker-5f777c54c6-9qb2c\" (UID: \"3f0a0acc-0bab-4743-a700-1a9f1e2d87c0\") " pod="calico-system/whisker-5f777c54c6-9qb2c" Jul 12 00:26:24.887644 kubelet[2983]: I0712 00:26:24.887100 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/213fb0de-6a80-4aa5-aeb1-a0af932ccfc6-config\") pod \"goldmane-58fd7646b9-p759q\" (UID: \"213fb0de-6a80-4aa5-aeb1-a0af932ccfc6\") " pod="calico-system/goldmane-58fd7646b9-p759q" Jul 12 00:26:24.887644 kubelet[2983]: I0712 00:26:24.887170 2983 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqfn7\" (UniqueName: \"kubernetes.io/projected/b25db915-a031-4109-9564-cc0834ce0083-kube-api-access-cqfn7\") pod \"coredns-7c65d6cfc9-msgjt\" (UID: \"b25db915-a031-4109-9564-cc0834ce0083\") " pod="kube-system/coredns-7c65d6cfc9-msgjt" Jul 12 00:26:24.887644 kubelet[2983]: I0712 00:26:24.887269 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/213fb0de-6a80-4aa5-aeb1-a0af932ccfc6-goldmane-key-pair\") pod \"goldmane-58fd7646b9-p759q\" (UID: \"213fb0de-6a80-4aa5-aeb1-a0af932ccfc6\") " pod="calico-system/goldmane-58fd7646b9-p759q" Jul 12 00:26:24.888016 kubelet[2983]: I0712 00:26:24.887341 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3f0a0acc-0bab-4743-a700-1a9f1e2d87c0-whisker-backend-key-pair\") pod \"whisker-5f777c54c6-9qb2c\" (UID: \"3f0a0acc-0bab-4743-a700-1a9f1e2d87c0\") " pod="calico-system/whisker-5f777c54c6-9qb2c" Jul 12 00:26:24.888016 kubelet[2983]: I0712 00:26:24.887402 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/213fb0de-6a80-4aa5-aeb1-a0af932ccfc6-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-p759q\" (UID: \"213fb0de-6a80-4aa5-aeb1-a0af932ccfc6\") " pod="calico-system/goldmane-58fd7646b9-p759q" Jul 12 00:26:25.117904 env[1913]: time="2025-07-12T00:26:25.117727573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6g88r,Uid:08c64d92-9452-47f2-8a8c-8837e4813c7d,Namespace:kube-system,Attempt:0,}" Jul 12 00:26:25.129286 env[1913]: time="2025-07-12T00:26:25.129045387Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-b9c4d9bf9-swqxk,Uid:0398604d-88a0-41c4-996f-ea9a3a6c7de4,Namespace:calico-system,Attempt:0,}" Jul 12 00:26:25.146695 env[1913]: time="2025-07-12T00:26:25.146610956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f777c54c6-9qb2c,Uid:3f0a0acc-0bab-4743-a700-1a9f1e2d87c0,Namespace:calico-system,Attempt:0,}" Jul 12 00:26:25.161528 env[1913]: time="2025-07-12T00:26:25.161462682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8494455ff7-gwch8,Uid:7051a960-7ce8-45f1-8249-f71049b41599,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:26:25.202206 env[1913]: time="2025-07-12T00:26:25.202138468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-p759q,Uid:213fb0de-6a80-4aa5-aeb1-a0af932ccfc6,Namespace:calico-system,Attempt:0,}" Jul 12 00:26:25.289070 env[1913]: time="2025-07-12T00:26:25.288988958Z" level=info msg="shim disconnected" id=9a0f7ea9c87afb2781caf3abad89ef072d2ae45cc84de12db98232a9a8b782b8 Jul 12 00:26:25.289070 env[1913]: time="2025-07-12T00:26:25.289060730Z" level=warning msg="cleaning up after shim disconnected" id=9a0f7ea9c87afb2781caf3abad89ef072d2ae45cc84de12db98232a9a8b782b8 namespace=k8s.io Jul 12 00:26:25.289449 env[1913]: time="2025-07-12T00:26:25.289085198Z" level=info msg="cleaning up dead shim" Jul 12 00:26:25.307581 env[1913]: time="2025-07-12T00:26:25.307513749Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:26:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3745 runtime=io.containerd.runc.v2\n" Jul 12 00:26:25.360553 env[1913]: time="2025-07-12T00:26:25.360476604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-msgjt,Uid:b25db915-a031-4109-9564-cc0834ce0083,Namespace:kube-system,Attempt:0,}" Jul 12 00:26:25.408989 env[1913]: time="2025-07-12T00:26:25.408908956Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-8494455ff7-bxk4f,Uid:c1b2666c-1d6f-4ba9-9d83-e51550e0fc3d,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:26:25.647260 env[1913]: time="2025-07-12T00:26:25.644645099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 12 00:26:25.738953 env[1913]: time="2025-07-12T00:26:25.738505453Z" level=error msg="Failed to destroy network for sandbox \"503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:25.740499 env[1913]: time="2025-07-12T00:26:25.740411573Z" level=error msg="encountered an error cleaning up failed sandbox \"503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:25.740809 env[1913]: time="2025-07-12T00:26:25.740713530Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8494455ff7-gwch8,Uid:7051a960-7ce8-45f1-8249-f71049b41599,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:25.746066 kubelet[2983]: E0712 00:26:25.745995 2983 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:25.748924 kubelet[2983]: E0712 00:26:25.746095 2983 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8494455ff7-gwch8" Jul 12 00:26:25.748924 kubelet[2983]: E0712 00:26:25.746138 2983 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8494455ff7-gwch8" Jul 12 00:26:25.748924 kubelet[2983]: E0712 00:26:25.746217 2983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8494455ff7-gwch8_calico-apiserver(7051a960-7ce8-45f1-8249-f71049b41599)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8494455ff7-gwch8_calico-apiserver(7051a960-7ce8-45f1-8249-f71049b41599)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8494455ff7-gwch8" podUID="7051a960-7ce8-45f1-8249-f71049b41599" Jul 12 00:26:25.822005 env[1913]: 
time="2025-07-12T00:26:25.821930186Z" level=error msg="Failed to destroy network for sandbox \"607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:25.822856 env[1913]: time="2025-07-12T00:26:25.822794848Z" level=error msg="encountered an error cleaning up failed sandbox \"607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:25.823073 env[1913]: time="2025-07-12T00:26:25.823022309Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-p759q,Uid:213fb0de-6a80-4aa5-aeb1-a0af932ccfc6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:25.824284 kubelet[2983]: E0712 00:26:25.823567 2983 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:25.824284 kubelet[2983]: E0712 00:26:25.823664 2983 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-p759q" Jul 12 00:26:25.824284 kubelet[2983]: E0712 00:26:25.823738 2983 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-p759q" Jul 12 00:26:25.826381 kubelet[2983]: E0712 00:26:25.823849 2983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-p759q_calico-system(213fb0de-6a80-4aa5-aeb1-a0af932ccfc6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-p759q_calico-system(213fb0de-6a80-4aa5-aeb1-a0af932ccfc6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-p759q" podUID="213fb0de-6a80-4aa5-aeb1-a0af932ccfc6" Jul 12 00:26:25.867516 env[1913]: time="2025-07-12T00:26:25.867417372Z" level=error msg="Failed to destroy network for sandbox \"0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 
00:26:25.868477 env[1913]: time="2025-07-12T00:26:25.868352546Z" level=error msg="encountered an error cleaning up failed sandbox \"0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:25.868775 env[1913]: time="2025-07-12T00:26:25.868698951Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f777c54c6-9qb2c,Uid:3f0a0acc-0bab-4743-a700-1a9f1e2d87c0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:25.871855 kubelet[2983]: E0712 00:26:25.869283 2983 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:25.871855 kubelet[2983]: E0712 00:26:25.869398 2983 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f777c54c6-9qb2c" Jul 12 00:26:25.871855 kubelet[2983]: E0712 00:26:25.869455 2983 kuberuntime_manager.go:1170] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f777c54c6-9qb2c" Jul 12 00:26:25.872376 kubelet[2983]: E0712 00:26:25.869553 2983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5f777c54c6-9qb2c_calico-system(3f0a0acc-0bab-4743-a700-1a9f1e2d87c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5f777c54c6-9qb2c_calico-system(3f0a0acc-0bab-4743-a700-1a9f1e2d87c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5f777c54c6-9qb2c" podUID="3f0a0acc-0bab-4743-a700-1a9f1e2d87c0" Jul 12 00:26:25.889367 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a-shm.mount: Deactivated successfully. Jul 12 00:26:25.889682 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6-shm.mount: Deactivated successfully. 
Jul 12 00:26:25.906562 env[1913]: time="2025-07-12T00:26:25.906463350Z" level=error msg="Failed to destroy network for sandbox \"a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:25.916203 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431-shm.mount: Deactivated successfully. Jul 12 00:26:25.918710 env[1913]: time="2025-07-12T00:26:25.918618443Z" level=error msg="encountered an error cleaning up failed sandbox \"a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:25.918869 env[1913]: time="2025-07-12T00:26:25.918726863Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b9c4d9bf9-swqxk,Uid:0398604d-88a0-41c4-996f-ea9a3a6c7de4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:25.919087 kubelet[2983]: E0712 00:26:25.919019 2983 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 12 00:26:25.919254 kubelet[2983]: E0712 00:26:25.919104 2983 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b9c4d9bf9-swqxk" Jul 12 00:26:25.919254 kubelet[2983]: E0712 00:26:25.919134 2983 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b9c4d9bf9-swqxk" Jul 12 00:26:25.919409 kubelet[2983]: E0712 00:26:25.919209 2983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-b9c4d9bf9-swqxk_calico-system(0398604d-88a0-41c4-996f-ea9a3a6c7de4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-b9c4d9bf9-swqxk_calico-system(0398604d-88a0-41c4-996f-ea9a3a6c7de4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b9c4d9bf9-swqxk" podUID="0398604d-88a0-41c4-996f-ea9a3a6c7de4" Jul 12 00:26:25.958562 env[1913]: time="2025-07-12T00:26:25.958465975Z" level=error msg="Failed to destroy network 
for sandbox \"df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:25.969158 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1-shm.mount: Deactivated successfully. Jul 12 00:26:25.971742 env[1913]: time="2025-07-12T00:26:25.971464177Z" level=error msg="encountered an error cleaning up failed sandbox \"df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:25.971742 env[1913]: time="2025-07-12T00:26:25.971586698Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6g88r,Uid:08c64d92-9452-47f2-8a8c-8837e4813c7d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:25.972643 kubelet[2983]: E0712 00:26:25.972276 2983 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:25.972643 kubelet[2983]: E0712 00:26:25.972389 2983 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6g88r" Jul 12 00:26:25.972643 kubelet[2983]: E0712 00:26:25.972473 2983 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6g88r" Jul 12 00:26:25.974807 kubelet[2983]: E0712 00:26:25.972566 2983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-6g88r_kube-system(08c64d92-9452-47f2-8a8c-8837e4813c7d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-6g88r_kube-system(08c64d92-9452-47f2-8a8c-8837e4813c7d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6g88r" podUID="08c64d92-9452-47f2-8a8c-8837e4813c7d" Jul 12 00:26:25.980907 env[1913]: time="2025-07-12T00:26:25.980820071Z" level=error msg="Failed to destroy network for sandbox \"70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jul 12 00:26:25.981826 env[1913]: time="2025-07-12T00:26:25.981762313Z" level=error msg="encountered an error cleaning up failed sandbox \"70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:25.982060 env[1913]: time="2025-07-12T00:26:25.982009706Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-msgjt,Uid:b25db915-a031-4109-9564-cc0834ce0083,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:25.985152 kubelet[2983]: E0712 00:26:25.982582 2983 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:25.985152 kubelet[2983]: E0712 00:26:25.982688 2983 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-msgjt" Jul 12 00:26:25.985152 kubelet[2983]: E0712 
00:26:25.982749 2983 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-msgjt" Jul 12 00:26:25.985582 kubelet[2983]: E0712 00:26:25.982879 2983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-msgjt_kube-system(b25db915-a031-4109-9564-cc0834ce0083)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-msgjt_kube-system(b25db915-a031-4109-9564-cc0834ce0083)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-msgjt" podUID="b25db915-a031-4109-9564-cc0834ce0083" Jul 12 00:26:25.991183 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f-shm.mount: Deactivated successfully. 
Jul 12 00:26:26.003019 env[1913]: time="2025-07-12T00:26:26.002937202Z" level=error msg="Failed to destroy network for sandbox \"d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:26.003894 env[1913]: time="2025-07-12T00:26:26.003826380Z" level=error msg="encountered an error cleaning up failed sandbox \"d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:26.004066 env[1913]: time="2025-07-12T00:26:26.003927493Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8494455ff7-bxk4f,Uid:c1b2666c-1d6f-4ba9-9d83-e51550e0fc3d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:26.004332 kubelet[2983]: E0712 00:26:26.004253 2983 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:26.004467 kubelet[2983]: E0712 00:26:26.004347 2983 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8494455ff7-bxk4f" Jul 12 00:26:26.004467 kubelet[2983]: E0712 00:26:26.004385 2983 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8494455ff7-bxk4f" Jul 12 00:26:26.004597 kubelet[2983]: E0712 00:26:26.004459 2983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8494455ff7-bxk4f_calico-apiserver(c1b2666c-1d6f-4ba9-9d83-e51550e0fc3d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8494455ff7-bxk4f_calico-apiserver(c1b2666c-1d6f-4ba9-9d83-e51550e0fc3d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8494455ff7-bxk4f" podUID="c1b2666c-1d6f-4ba9-9d83-e51550e0fc3d" Jul 12 00:26:26.391350 env[1913]: time="2025-07-12T00:26:26.391289234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g7wxf,Uid:355545c7-e2b3-4e21-bab3-2e3ea1245fce,Namespace:calico-system,Attempt:0,}" Jul 12 00:26:26.498178 env[1913]: time="2025-07-12T00:26:26.498095132Z" level=error msg="Failed to 
destroy network for sandbox \"f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:26.503955 env[1913]: time="2025-07-12T00:26:26.503877405Z" level=error msg="encountered an error cleaning up failed sandbox \"f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:26.504341 env[1913]: time="2025-07-12T00:26:26.504264226Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g7wxf,Uid:355545c7-e2b3-4e21-bab3-2e3ea1245fce,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:26.504978 kubelet[2983]: E0712 00:26:26.504934 2983 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:26.505248 kubelet[2983]: E0712 00:26:26.505190 2983 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g7wxf" Jul 12 00:26:26.505419 kubelet[2983]: E0712 00:26:26.505382 2983 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g7wxf" Jul 12 00:26:26.505649 kubelet[2983]: E0712 00:26:26.505573 2983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-g7wxf_calico-system(355545c7-e2b3-4e21-bab3-2e3ea1245fce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-g7wxf_calico-system(355545c7-e2b3-4e21-bab3-2e3ea1245fce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g7wxf" podUID="355545c7-e2b3-4e21-bab3-2e3ea1245fce" Jul 12 00:26:26.624155 kubelet[2983]: I0712 00:26:26.624102 2983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" Jul 12 00:26:26.626420 env[1913]: time="2025-07-12T00:26:26.625400652Z" level=info msg="StopPodSandbox for \"a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431\"" Jul 12 00:26:26.629074 kubelet[2983]: I0712 00:26:26.629010 2983 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" Jul 12 00:26:26.631249 env[1913]: time="2025-07-12T00:26:26.631165969Z" level=info msg="StopPodSandbox for \"df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1\"" Jul 12 00:26:26.639905 kubelet[2983]: I0712 00:26:26.639848 2983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" Jul 12 00:26:26.644190 env[1913]: time="2025-07-12T00:26:26.643280993Z" level=info msg="StopPodSandbox for \"70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f\"" Jul 12 00:26:26.647657 kubelet[2983]: I0712 00:26:26.647608 2983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" Jul 12 00:26:26.652174 env[1913]: time="2025-07-12T00:26:26.652104805Z" level=info msg="StopPodSandbox for \"0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6\"" Jul 12 00:26:26.657834 kubelet[2983]: I0712 00:26:26.657332 2983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" Jul 12 00:26:26.662610 env[1913]: time="2025-07-12T00:26:26.662549701Z" level=info msg="StopPodSandbox for \"f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04\"" Jul 12 00:26:26.664900 kubelet[2983]: I0712 00:26:26.664764 2983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" Jul 12 00:26:26.671613 env[1913]: time="2025-07-12T00:26:26.670690352Z" level=info msg="StopPodSandbox for \"607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a\"" Jul 12 00:26:26.674151 kubelet[2983]: I0712 00:26:26.674102 2983 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" Jul 12 00:26:26.676848 env[1913]: time="2025-07-12T00:26:26.675978476Z" level=info msg="StopPodSandbox for \"d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8\"" Jul 12 00:26:26.684849 kubelet[2983]: I0712 00:26:26.684786 2983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" Jul 12 00:26:26.687148 env[1913]: time="2025-07-12T00:26:26.686149532Z" level=info msg="StopPodSandbox for \"503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3\"" Jul 12 00:26:26.772804 env[1913]: time="2025-07-12T00:26:26.772709294Z" level=error msg="StopPodSandbox for \"df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1\" failed" error="failed to destroy network for sandbox \"df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:26.773961 kubelet[2983]: E0712 00:26:26.773661 2983 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" Jul 12 00:26:26.773961 kubelet[2983]: E0712 00:26:26.773745 2983 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1"} Jul 12 00:26:26.773961 kubelet[2983]: E0712 00:26:26.773828 2983 kuberuntime_manager.go:1079] "killPodWithSyncResult 
failed" err="failed to \"KillPodSandbox\" for \"08c64d92-9452-47f2-8a8c-8837e4813c7d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:26:26.773961 kubelet[2983]: E0712 00:26:26.773869 2983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"08c64d92-9452-47f2-8a8c-8837e4813c7d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6g88r" podUID="08c64d92-9452-47f2-8a8c-8837e4813c7d" Jul 12 00:26:26.799785 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8-shm.mount: Deactivated successfully. 
Jul 12 00:26:26.843769 env[1913]: time="2025-07-12T00:26:26.843683014Z" level=error msg="StopPodSandbox for \"a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431\" failed" error="failed to destroy network for sandbox \"a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:26.844565 kubelet[2983]: E0712 00:26:26.844255 2983 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" Jul 12 00:26:26.844565 kubelet[2983]: E0712 00:26:26.844348 2983 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431"} Jul 12 00:26:26.844565 kubelet[2983]: E0712 00:26:26.844434 2983 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0398604d-88a0-41c4-996f-ea9a3a6c7de4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:26:26.844565 kubelet[2983]: E0712 00:26:26.844477 2983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0398604d-88a0-41c4-996f-ea9a3a6c7de4\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b9c4d9bf9-swqxk" podUID="0398604d-88a0-41c4-996f-ea9a3a6c7de4" Jul 12 00:26:26.866419 env[1913]: time="2025-07-12T00:26:26.866330738Z" level=error msg="StopPodSandbox for \"0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6\" failed" error="failed to destroy network for sandbox \"0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:26.867277 kubelet[2983]: E0712 00:26:26.866857 2983 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" Jul 12 00:26:26.867277 kubelet[2983]: E0712 00:26:26.866987 2983 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6"} Jul 12 00:26:26.867277 kubelet[2983]: E0712 00:26:26.867091 2983 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3f0a0acc-0bab-4743-a700-1a9f1e2d87c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:26:26.867277 kubelet[2983]: E0712 00:26:26.867159 2983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3f0a0acc-0bab-4743-a700-1a9f1e2d87c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5f777c54c6-9qb2c" podUID="3f0a0acc-0bab-4743-a700-1a9f1e2d87c0" Jul 12 00:26:26.896453 env[1913]: time="2025-07-12T00:26:26.894998659Z" level=error msg="StopPodSandbox for \"d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8\" failed" error="failed to destroy network for sandbox \"d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:26.897503 kubelet[2983]: E0712 00:26:26.897112 2983 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" Jul 12 00:26:26.897503 kubelet[2983]: E0712 00:26:26.897207 2983 
kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8"} Jul 12 00:26:26.897503 kubelet[2983]: E0712 00:26:26.897299 2983 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c1b2666c-1d6f-4ba9-9d83-e51550e0fc3d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:26:26.897503 kubelet[2983]: E0712 00:26:26.897385 2983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c1b2666c-1d6f-4ba9-9d83-e51550e0fc3d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8494455ff7-bxk4f" podUID="c1b2666c-1d6f-4ba9-9d83-e51550e0fc3d" Jul 12 00:26:26.910590 env[1913]: time="2025-07-12T00:26:26.910483843Z" level=error msg="StopPodSandbox for \"70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f\" failed" error="failed to destroy network for sandbox \"70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:26.911380 kubelet[2983]: E0712 00:26:26.911029 2983 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to destroy network for sandbox \"70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" Jul 12 00:26:26.911380 kubelet[2983]: E0712 00:26:26.911169 2983 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f"} Jul 12 00:26:26.911380 kubelet[2983]: E0712 00:26:26.911263 2983 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b25db915-a031-4109-9564-cc0834ce0083\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:26:26.911380 kubelet[2983]: E0712 00:26:26.911307 2983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b25db915-a031-4109-9564-cc0834ce0083\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-msgjt" podUID="b25db915-a031-4109-9564-cc0834ce0083" Jul 12 00:26:26.959767 env[1913]: time="2025-07-12T00:26:26.959684604Z" level=error msg="StopPodSandbox for \"607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a\" 
failed" error="failed to destroy network for sandbox \"607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:26.960599 kubelet[2983]: E0712 00:26:26.960268 2983 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" Jul 12 00:26:26.960599 kubelet[2983]: E0712 00:26:26.960387 2983 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a"} Jul 12 00:26:26.960599 kubelet[2983]: E0712 00:26:26.960465 2983 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"213fb0de-6a80-4aa5-aeb1-a0af932ccfc6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:26:26.960599 kubelet[2983]: E0712 00:26:26.960532 2983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"213fb0de-6a80-4aa5-aeb1-a0af932ccfc6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-p759q" podUID="213fb0de-6a80-4aa5-aeb1-a0af932ccfc6" Jul 12 00:26:26.963857 env[1913]: time="2025-07-12T00:26:26.963783681Z" level=error msg="StopPodSandbox for \"f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04\" failed" error="failed to destroy network for sandbox \"f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:26.964763 kubelet[2983]: E0712 00:26:26.964415 2983 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" Jul 12 00:26:26.964763 kubelet[2983]: E0712 00:26:26.964527 2983 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04"} Jul 12 00:26:26.964763 kubelet[2983]: E0712 00:26:26.964609 2983 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"355545c7-e2b3-4e21-bab3-2e3ea1245fce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:26:26.964763 kubelet[2983]: E0712 00:26:26.964685 2983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"355545c7-e2b3-4e21-bab3-2e3ea1245fce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g7wxf" podUID="355545c7-e2b3-4e21-bab3-2e3ea1245fce" Jul 12 00:26:26.978317 env[1913]: time="2025-07-12T00:26:26.978211987Z" level=error msg="StopPodSandbox for \"503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3\" failed" error="failed to destroy network for sandbox \"503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:26:26.979184 kubelet[2983]: E0712 00:26:26.978856 2983 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" Jul 12 00:26:26.979184 kubelet[2983]: E0712 00:26:26.978985 2983 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3"} Jul 12 00:26:26.979184 kubelet[2983]: E0712 
00:26:26.979061 2983 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7051a960-7ce8-45f1-8249-f71049b41599\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:26:26.979184 kubelet[2983]: E0712 00:26:26.979110 2983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7051a960-7ce8-45f1-8249-f71049b41599\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8494455ff7-gwch8" podUID="7051a960-7ce8-45f1-8249-f71049b41599" Jul 12 00:26:34.085446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2696379366.mount: Deactivated successfully. 
Jul 12 00:26:34.197661 env[1913]: time="2025-07-12T00:26:34.197573113Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:34.203607 env[1913]: time="2025-07-12T00:26:34.203536480Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:34.208485 env[1913]: time="2025-07-12T00:26:34.208411410Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:34.213501 env[1913]: time="2025-07-12T00:26:34.213438683Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:34.216002 env[1913]: time="2025-07-12T00:26:34.214878379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 12 00:26:34.265640 env[1913]: time="2025-07-12T00:26:34.265583938Z" level=info msg="CreateContainer within sandbox \"58241c71b6a13d6b67bd08db584f33abb101916a8e6fc064b73a4416ee64ac97\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 12 00:26:34.313034 env[1913]: time="2025-07-12T00:26:34.312947434Z" level=info msg="CreateContainer within sandbox \"58241c71b6a13d6b67bd08db584f33abb101916a8e6fc064b73a4416ee64ac97\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"42b9d6dc0035ca32ad37cd639945c7b3f5acda86cdae0a63724ad380d59dcb45\"" Jul 12 00:26:34.314126 env[1913]: time="2025-07-12T00:26:34.314070731Z" level=info msg="StartContainer for 
\"42b9d6dc0035ca32ad37cd639945c7b3f5acda86cdae0a63724ad380d59dcb45\"" Jul 12 00:26:34.478718 env[1913]: time="2025-07-12T00:26:34.478637472Z" level=info msg="StartContainer for \"42b9d6dc0035ca32ad37cd639945c7b3f5acda86cdae0a63724ad380d59dcb45\" returns successfully" Jul 12 00:26:34.758138 kubelet[2983]: I0712 00:26:34.757624 2983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4w679" podStartSLOduration=1.223635878 podStartE2EDuration="20.757598304s" podCreationTimestamp="2025-07-12 00:26:14 +0000 UTC" firstStartedPulling="2025-07-12 00:26:14.683336468 +0000 UTC m=+31.656537476" lastFinishedPulling="2025-07-12 00:26:34.217298906 +0000 UTC m=+51.190499902" observedRunningTime="2025-07-12 00:26:34.755665228 +0000 UTC m=+51.728866260" watchObservedRunningTime="2025-07-12 00:26:34.757598304 +0000 UTC m=+51.730799300" Jul 12 00:26:34.930978 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 12 00:26:34.931168 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 12 00:26:35.156721 env[1913]: time="2025-07-12T00:26:35.156666214Z" level=info msg="StopPodSandbox for \"0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6\"" Jul 12 00:26:35.527408 env[1913]: 2025-07-12 00:26:35.379 [INFO][4167] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" Jul 12 00:26:35.527408 env[1913]: 2025-07-12 00:26:35.380 [INFO][4167] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" iface="eth0" netns="/var/run/netns/cni-3151a693-5aeb-96c0-ba4c-0410afa1f3a8" Jul 12 00:26:35.527408 env[1913]: 2025-07-12 00:26:35.380 [INFO][4167] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" iface="eth0" netns="/var/run/netns/cni-3151a693-5aeb-96c0-ba4c-0410afa1f3a8" Jul 12 00:26:35.527408 env[1913]: 2025-07-12 00:26:35.382 [INFO][4167] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" iface="eth0" netns="/var/run/netns/cni-3151a693-5aeb-96c0-ba4c-0410afa1f3a8" Jul 12 00:26:35.527408 env[1913]: 2025-07-12 00:26:35.382 [INFO][4167] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" Jul 12 00:26:35.527408 env[1913]: 2025-07-12 00:26:35.382 [INFO][4167] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" Jul 12 00:26:35.527408 env[1913]: 2025-07-12 00:26:35.489 [INFO][4178] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" HandleID="k8s-pod-network.0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" Workload="ip--172--31--29--120-k8s-whisker--5f777c54c6--9qb2c-eth0" Jul 12 00:26:35.527408 env[1913]: 2025-07-12 00:26:35.490 [INFO][4178] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:26:35.527408 env[1913]: 2025-07-12 00:26:35.490 [INFO][4178] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:26:35.527408 env[1913]: 2025-07-12 00:26:35.513 [WARNING][4178] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" HandleID="k8s-pod-network.0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" Workload="ip--172--31--29--120-k8s-whisker--5f777c54c6--9qb2c-eth0" Jul 12 00:26:35.527408 env[1913]: 2025-07-12 00:26:35.513 [INFO][4178] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" HandleID="k8s-pod-network.0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" Workload="ip--172--31--29--120-k8s-whisker--5f777c54c6--9qb2c-eth0" Jul 12 00:26:35.527408 env[1913]: 2025-07-12 00:26:35.518 [INFO][4178] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:26:35.527408 env[1913]: 2025-07-12 00:26:35.524 [INFO][4167] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" Jul 12 00:26:35.533194 systemd[1]: run-netns-cni\x2d3151a693\x2d5aeb\x2d96c0\x2dba4c\x2d0410afa1f3a8.mount: Deactivated successfully. 
Jul 12 00:26:35.535526 env[1913]: time="2025-07-12T00:26:35.535268545Z" level=info msg="TearDown network for sandbox \"0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6\" successfully" Jul 12 00:26:35.535526 env[1913]: time="2025-07-12T00:26:35.535334234Z" level=info msg="StopPodSandbox for \"0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6\" returns successfully" Jul 12 00:26:35.588403 kubelet[2983]: I0712 00:26:35.587631 2983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3f0a0acc-0bab-4743-a700-1a9f1e2d87c0-whisker-backend-key-pair\") pod \"3f0a0acc-0bab-4743-a700-1a9f1e2d87c0\" (UID: \"3f0a0acc-0bab-4743-a700-1a9f1e2d87c0\") " Jul 12 00:26:35.588403 kubelet[2983]: I0712 00:26:35.588304 2983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gks8p\" (UniqueName: \"kubernetes.io/projected/3f0a0acc-0bab-4743-a700-1a9f1e2d87c0-kube-api-access-gks8p\") pod \"3f0a0acc-0bab-4743-a700-1a9f1e2d87c0\" (UID: \"3f0a0acc-0bab-4743-a700-1a9f1e2d87c0\") " Jul 12 00:26:35.588403 kubelet[2983]: I0712 00:26:35.588351 2983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f0a0acc-0bab-4743-a700-1a9f1e2d87c0-whisker-ca-bundle\") pod \"3f0a0acc-0bab-4743-a700-1a9f1e2d87c0\" (UID: \"3f0a0acc-0bab-4743-a700-1a9f1e2d87c0\") " Jul 12 00:26:35.589492 kubelet[2983]: I0712 00:26:35.589445 2983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f0a0acc-0bab-4743-a700-1a9f1e2d87c0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3f0a0acc-0bab-4743-a700-1a9f1e2d87c0" (UID: "3f0a0acc-0bab-4743-a700-1a9f1e2d87c0"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 12 00:26:35.597313 systemd[1]: var-lib-kubelet-pods-3f0a0acc\x2d0bab\x2d4743\x2da700\x2d1a9f1e2d87c0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 12 00:26:35.619263 kubelet[2983]: I0712 00:26:35.602374 2983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f0a0acc-0bab-4743-a700-1a9f1e2d87c0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3f0a0acc-0bab-4743-a700-1a9f1e2d87c0" (UID: "3f0a0acc-0bab-4743-a700-1a9f1e2d87c0"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 12 00:26:35.614976 systemd[1]: var-lib-kubelet-pods-3f0a0acc\x2d0bab\x2d4743\x2da700\x2d1a9f1e2d87c0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgks8p.mount: Deactivated successfully. Jul 12 00:26:35.619610 kubelet[2983]: I0712 00:26:35.619477 2983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f0a0acc-0bab-4743-a700-1a9f1e2d87c0-kube-api-access-gks8p" (OuterVolumeSpecName: "kube-api-access-gks8p") pod "3f0a0acc-0bab-4743-a700-1a9f1e2d87c0" (UID: "3f0a0acc-0bab-4743-a700-1a9f1e2d87c0"). InnerVolumeSpecName "kube-api-access-gks8p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 12 00:26:35.689137 kubelet[2983]: I0712 00:26:35.689087 2983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gks8p\" (UniqueName: \"kubernetes.io/projected/3f0a0acc-0bab-4743-a700-1a9f1e2d87c0-kube-api-access-gks8p\") on node \"ip-172-31-29-120\" DevicePath \"\"" Jul 12 00:26:35.689440 kubelet[2983]: I0712 00:26:35.689410 2983 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f0a0acc-0bab-4743-a700-1a9f1e2d87c0-whisker-ca-bundle\") on node \"ip-172-31-29-120\" DevicePath \"\"" Jul 12 00:26:35.689589 kubelet[2983]: I0712 00:26:35.689562 2983 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3f0a0acc-0bab-4743-a700-1a9f1e2d87c0-whisker-backend-key-pair\") on node \"ip-172-31-29-120\" DevicePath \"\"" Jul 12 00:26:35.892998 kubelet[2983]: I0712 00:26:35.892923 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qds5t\" (UniqueName: \"kubernetes.io/projected/465a642d-221e-4ec3-b85b-6defef73cb8a-kube-api-access-qds5t\") pod \"whisker-7947b8c6c6-2mhn7\" (UID: \"465a642d-221e-4ec3-b85b-6defef73cb8a\") " pod="calico-system/whisker-7947b8c6c6-2mhn7" Jul 12 00:26:35.893832 kubelet[2983]: I0712 00:26:35.893794 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/465a642d-221e-4ec3-b85b-6defef73cb8a-whisker-backend-key-pair\") pod \"whisker-7947b8c6c6-2mhn7\" (UID: \"465a642d-221e-4ec3-b85b-6defef73cb8a\") " pod="calico-system/whisker-7947b8c6c6-2mhn7" Jul 12 00:26:35.894017 kubelet[2983]: I0712 00:26:35.893990 2983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/465a642d-221e-4ec3-b85b-6defef73cb8a-whisker-ca-bundle\") pod \"whisker-7947b8c6c6-2mhn7\" (UID: \"465a642d-221e-4ec3-b85b-6defef73cb8a\") " pod="calico-system/whisker-7947b8c6c6-2mhn7" Jul 12 00:26:36.178029 env[1913]: time="2025-07-12T00:26:36.177890380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7947b8c6c6-2mhn7,Uid:465a642d-221e-4ec3-b85b-6defef73cb8a,Namespace:calico-system,Attempt:0,}" Jul 12 00:26:36.476531 (udev-worker)[4150]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:26:36.479363 systemd-networkd[1586]: cali348c4eadb9a: Link UP Jul 12 00:26:36.486972 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 12 00:26:36.487089 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali348c4eadb9a: link becomes ready Jul 12 00:26:36.487388 systemd-networkd[1586]: cali348c4eadb9a: Gained carrier Jul 12 00:26:36.522654 env[1913]: 2025-07-12 00:26:36.273 [INFO][4219] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 12 00:26:36.522654 env[1913]: 2025-07-12 00:26:36.298 [INFO][4219] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--120-k8s-whisker--7947b8c6c6--2mhn7-eth0 whisker-7947b8c6c6- calico-system 465a642d-221e-4ec3-b85b-6defef73cb8a 913 0 2025-07-12 00:26:35 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7947b8c6c6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-29-120 whisker-7947b8c6c6-2mhn7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali348c4eadb9a [] [] }} ContainerID="a62887f4ece0f5eddffbe23cddd5512151b7320ba36a356f9be735564ddc8fbc" Namespace="calico-system" Pod="whisker-7947b8c6c6-2mhn7" WorkloadEndpoint="ip--172--31--29--120-k8s-whisker--7947b8c6c6--2mhn7-" Jul 12 00:26:36.522654 env[1913]: 2025-07-12 00:26:36.298 [INFO][4219] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a62887f4ece0f5eddffbe23cddd5512151b7320ba36a356f9be735564ddc8fbc" Namespace="calico-system" Pod="whisker-7947b8c6c6-2mhn7" WorkloadEndpoint="ip--172--31--29--120-k8s-whisker--7947b8c6c6--2mhn7-eth0" Jul 12 00:26:36.522654 env[1913]: 2025-07-12 00:26:36.379 [INFO][4231] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a62887f4ece0f5eddffbe23cddd5512151b7320ba36a356f9be735564ddc8fbc" HandleID="k8s-pod-network.a62887f4ece0f5eddffbe23cddd5512151b7320ba36a356f9be735564ddc8fbc" Workload="ip--172--31--29--120-k8s-whisker--7947b8c6c6--2mhn7-eth0" Jul 12 00:26:36.522654 env[1913]: 2025-07-12 00:26:36.379 [INFO][4231] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a62887f4ece0f5eddffbe23cddd5512151b7320ba36a356f9be735564ddc8fbc" HandleID="k8s-pod-network.a62887f4ece0f5eddffbe23cddd5512151b7320ba36a356f9be735564ddc8fbc" Workload="ip--172--31--29--120-k8s-whisker--7947b8c6c6--2mhn7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb600), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-120", "pod":"whisker-7947b8c6c6-2mhn7", "timestamp":"2025-07-12 00:26:36.379189797 +0000 UTC"}, Hostname:"ip-172-31-29-120", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:26:36.522654 env[1913]: 2025-07-12 00:26:36.379 [INFO][4231] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:26:36.522654 env[1913]: 2025-07-12 00:26:36.380 [INFO][4231] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:26:36.522654 env[1913]: 2025-07-12 00:26:36.380 [INFO][4231] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-120' Jul 12 00:26:36.522654 env[1913]: 2025-07-12 00:26:36.394 [INFO][4231] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a62887f4ece0f5eddffbe23cddd5512151b7320ba36a356f9be735564ddc8fbc" host="ip-172-31-29-120" Jul 12 00:26:36.522654 env[1913]: 2025-07-12 00:26:36.405 [INFO][4231] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-120" Jul 12 00:26:36.522654 env[1913]: 2025-07-12 00:26:36.414 [INFO][4231] ipam/ipam.go 511: Trying affinity for 192.168.107.192/26 host="ip-172-31-29-120" Jul 12 00:26:36.522654 env[1913]: 2025-07-12 00:26:36.418 [INFO][4231] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.192/26 host="ip-172-31-29-120" Jul 12 00:26:36.522654 env[1913]: 2025-07-12 00:26:36.422 [INFO][4231] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.192/26 host="ip-172-31-29-120" Jul 12 00:26:36.522654 env[1913]: 2025-07-12 00:26:36.422 [INFO][4231] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.192/26 handle="k8s-pod-network.a62887f4ece0f5eddffbe23cddd5512151b7320ba36a356f9be735564ddc8fbc" host="ip-172-31-29-120" Jul 12 00:26:36.522654 env[1913]: 2025-07-12 00:26:36.425 [INFO][4231] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a62887f4ece0f5eddffbe23cddd5512151b7320ba36a356f9be735564ddc8fbc Jul 12 00:26:36.522654 env[1913]: 2025-07-12 00:26:36.438 [INFO][4231] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.192/26 handle="k8s-pod-network.a62887f4ece0f5eddffbe23cddd5512151b7320ba36a356f9be735564ddc8fbc" host="ip-172-31-29-120" Jul 12 00:26:36.522654 env[1913]: 2025-07-12 00:26:36.453 [INFO][4231] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.107.193/26] block=192.168.107.192/26 
handle="k8s-pod-network.a62887f4ece0f5eddffbe23cddd5512151b7320ba36a356f9be735564ddc8fbc" host="ip-172-31-29-120" Jul 12 00:26:36.522654 env[1913]: 2025-07-12 00:26:36.453 [INFO][4231] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.193/26] handle="k8s-pod-network.a62887f4ece0f5eddffbe23cddd5512151b7320ba36a356f9be735564ddc8fbc" host="ip-172-31-29-120" Jul 12 00:26:36.522654 env[1913]: 2025-07-12 00:26:36.453 [INFO][4231] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:26:36.522654 env[1913]: 2025-07-12 00:26:36.453 [INFO][4231] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.193/26] IPv6=[] ContainerID="a62887f4ece0f5eddffbe23cddd5512151b7320ba36a356f9be735564ddc8fbc" HandleID="k8s-pod-network.a62887f4ece0f5eddffbe23cddd5512151b7320ba36a356f9be735564ddc8fbc" Workload="ip--172--31--29--120-k8s-whisker--7947b8c6c6--2mhn7-eth0" Jul 12 00:26:36.524054 env[1913]: 2025-07-12 00:26:36.460 [INFO][4219] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a62887f4ece0f5eddffbe23cddd5512151b7320ba36a356f9be735564ddc8fbc" Namespace="calico-system" Pod="whisker-7947b8c6c6-2mhn7" WorkloadEndpoint="ip--172--31--29--120-k8s-whisker--7947b8c6c6--2mhn7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--120-k8s-whisker--7947b8c6c6--2mhn7-eth0", GenerateName:"whisker-7947b8c6c6-", Namespace:"calico-system", SelfLink:"", UID:"465a642d-221e-4ec3-b85b-6defef73cb8a", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7947b8c6c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-120", ContainerID:"", Pod:"whisker-7947b8c6c6-2mhn7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.107.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali348c4eadb9a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:26:36.524054 env[1913]: 2025-07-12 00:26:36.460 [INFO][4219] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.193/32] ContainerID="a62887f4ece0f5eddffbe23cddd5512151b7320ba36a356f9be735564ddc8fbc" Namespace="calico-system" Pod="whisker-7947b8c6c6-2mhn7" WorkloadEndpoint="ip--172--31--29--120-k8s-whisker--7947b8c6c6--2mhn7-eth0" Jul 12 00:26:36.524054 env[1913]: 2025-07-12 00:26:36.460 [INFO][4219] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali348c4eadb9a ContainerID="a62887f4ece0f5eddffbe23cddd5512151b7320ba36a356f9be735564ddc8fbc" Namespace="calico-system" Pod="whisker-7947b8c6c6-2mhn7" WorkloadEndpoint="ip--172--31--29--120-k8s-whisker--7947b8c6c6--2mhn7-eth0" Jul 12 00:26:36.524054 env[1913]: 2025-07-12 00:26:36.490 [INFO][4219] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a62887f4ece0f5eddffbe23cddd5512151b7320ba36a356f9be735564ddc8fbc" Namespace="calico-system" Pod="whisker-7947b8c6c6-2mhn7" WorkloadEndpoint="ip--172--31--29--120-k8s-whisker--7947b8c6c6--2mhn7-eth0" Jul 12 00:26:36.524054 env[1913]: 2025-07-12 00:26:36.491 [INFO][4219] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a62887f4ece0f5eddffbe23cddd5512151b7320ba36a356f9be735564ddc8fbc" Namespace="calico-system" 
Pod="whisker-7947b8c6c6-2mhn7" WorkloadEndpoint="ip--172--31--29--120-k8s-whisker--7947b8c6c6--2mhn7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--120-k8s-whisker--7947b8c6c6--2mhn7-eth0", GenerateName:"whisker-7947b8c6c6-", Namespace:"calico-system", SelfLink:"", UID:"465a642d-221e-4ec3-b85b-6defef73cb8a", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7947b8c6c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-120", ContainerID:"a62887f4ece0f5eddffbe23cddd5512151b7320ba36a356f9be735564ddc8fbc", Pod:"whisker-7947b8c6c6-2mhn7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.107.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali348c4eadb9a", MAC:"ce:76:5b:51:e2:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:26:36.524054 env[1913]: 2025-07-12 00:26:36.518 [INFO][4219] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a62887f4ece0f5eddffbe23cddd5512151b7320ba36a356f9be735564ddc8fbc" Namespace="calico-system" Pod="whisker-7947b8c6c6-2mhn7" WorkloadEndpoint="ip--172--31--29--120-k8s-whisker--7947b8c6c6--2mhn7-eth0" Jul 12 00:26:36.542856 env[1913]: 
time="2025-07-12T00:26:36.542702000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:26:36.543583 env[1913]: time="2025-07-12T00:26:36.542788486Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:26:36.543808 env[1913]: time="2025-07-12T00:26:36.543714474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:26:36.544564 env[1913]: time="2025-07-12T00:26:36.544401380Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a62887f4ece0f5eddffbe23cddd5512151b7320ba36a356f9be735564ddc8fbc pid=4252 runtime=io.containerd.runc.v2 Jul 12 00:26:36.673361 env[1913]: time="2025-07-12T00:26:36.673304830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7947b8c6c6-2mhn7,Uid:465a642d-221e-4ec3-b85b-6defef73cb8a,Namespace:calico-system,Attempt:0,} returns sandbox id \"a62887f4ece0f5eddffbe23cddd5512151b7320ba36a356f9be735564ddc8fbc\"" Jul 12 00:26:36.678776 env[1913]: time="2025-07-12T00:26:36.676623525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 12 00:26:36.845000 audit[4322]: AVC avc: denied { write } for pid=4322 comm="tee" name="fd" dev="proc" ino=21413 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 12 00:26:36.859272 kernel: audit: type=1400 audit(1752279996.845:302): avc: denied { write } for pid=4322 comm="tee" name="fd" dev="proc" ino=21413 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 12 00:26:36.859000 audit[4324]: AVC avc: denied { write } for pid=4324 comm="tee" name="fd" dev="proc" ino=21416 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=dir permissive=0 Jul 12 00:26:36.859000 audit[4324]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffce1507e3 a2=241 a3=1b6 items=1 ppid=4304 pid=4324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:36.886317 kernel: audit: type=1400 audit(1752279996.859:303): avc: denied { write } for pid=4324 comm="tee" name="fd" dev="proc" ino=21416 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 12 00:26:36.886486 kernel: audit: type=1300 audit(1752279996.859:303): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffce1507e3 a2=241 a3=1b6 items=1 ppid=4304 pid=4324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:36.859000 audit: CWD cwd="/etc/service/enabled/confd/log" Jul 12 00:26:36.890161 kernel: audit: type=1307 audit(1752279996.859:303): cwd="/etc/service/enabled/confd/log" Jul 12 00:26:36.859000 audit: PATH item=0 name="/dev/fd/63" inode=21387 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 12 00:26:36.898823 kernel: audit: type=1302 audit(1752279996.859:303): item=0 name="/dev/fd/63" inode=21387 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 12 00:26:36.859000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 12 00:26:36.906982 kernel: audit: type=1327 audit(1752279996.859:303): 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 12 00:26:36.860000 audit[4332]: AVC avc: denied { write } for pid=4332 comm="tee" name="fd" dev="proc" ino=22203 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 12 00:26:36.860000 audit[4332]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffffd53c7e3 a2=241 a3=1b6 items=1 ppid=4306 pid=4332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:36.927771 kernel: audit: type=1400 audit(1752279996.860:304): avc: denied { write } for pid=4332 comm="tee" name="fd" dev="proc" ino=22203 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 12 00:26:36.927945 kernel: audit: type=1300 audit(1752279996.860:304): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffffd53c7e3 a2=241 a3=1b6 items=1 ppid=4306 pid=4332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:36.860000 audit: CWD cwd="/etc/service/enabled/felix/log" Jul 12 00:26:36.860000 audit: PATH item=0 name="/dev/fd/63" inode=21402 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 12 00:26:36.941715 kernel: audit: type=1307 audit(1752279996.860:304): cwd="/etc/service/enabled/felix/log" Jul 12 00:26:36.941915 kernel: audit: type=1302 audit(1752279996.860:304): item=0 name="/dev/fd/63" inode=21402 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Jul 12 00:26:36.860000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 12 00:26:36.845000 audit[4322]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffcf3637e5 a2=241 a3=1b6 items=1 ppid=4296 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:36.845000 audit: CWD cwd="/etc/service/enabled/cni/log" Jul 12 00:26:36.845000 audit: PATH item=0 name="/dev/fd/63" inode=22182 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 12 00:26:36.845000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 12 00:26:36.885000 audit[4345]: AVC avc: denied { write } for pid=4345 comm="tee" name="fd" dev="proc" ino=21422 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 12 00:26:36.885000 audit[4345]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff06fb7e4 a2=241 a3=1b6 items=1 ppid=4300 pid=4345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:36.885000 audit: CWD cwd="/etc/service/enabled/bird/log" Jul 12 00:26:36.885000 audit: PATH item=0 name="/dev/fd/63" inode=21412 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 12 00:26:36.885000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 12 00:26:36.907000 audit[4342]: AVC avc: denied { write } for pid=4342 comm="tee" name="fd" dev="proc" ino=22218 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 12 00:26:36.907000 audit[4342]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe0c5c7d3 a2=241 a3=1b6 items=1 ppid=4298 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:36.907000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jul 12 00:26:36.907000 audit: PATH item=0 name="/dev/fd/63" inode=21411 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 12 00:26:36.907000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 12 00:26:36.954000 audit[4356]: AVC avc: denied { write } for pid=4356 comm="tee" name="fd" dev="proc" ino=21427 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 12 00:26:36.954000 audit[4356]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc03637d4 a2=241 a3=1b6 items=1 ppid=4301 pid=4356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:36.954000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jul 12 00:26:36.954000 audit: PATH item=0 name="/dev/fd/63" inode=22222 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 
obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 12 00:26:36.954000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 12 00:26:37.028000 audit[4367]: AVC avc: denied { write } for pid=4367 comm="tee" name="fd" dev="proc" ino=21434 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 12 00:26:37.028000 audit[4367]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffdeb957e3 a2=241 a3=1b6 items=1 ppid=4311 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:37.028000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jul 12 00:26:37.028000 audit: PATH item=0 name="/dev/fd/63" inode=21424 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 12 00:26:37.028000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 12 00:26:37.392806 kubelet[2983]: I0712 00:26:37.392692 2983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f0a0acc-0bab-4743-a700-1a9f1e2d87c0" path="/var/lib/kubelet/pods/3f0a0acc-0bab-4743-a700-1a9f1e2d87c0/volumes" Jul 12 00:26:37.574000 audit[4387]: AVC avc: denied { bpf } for pid=4387 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.574000 audit[4387]: AVC avc: denied { bpf } for pid=4387 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Jul 12 00:26:37.574000 audit[4387]: AVC avc: denied { perfmon } for pid=4387 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.574000 audit[4387]: AVC avc: denied { perfmon } for pid=4387 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.574000 audit[4387]: AVC avc: denied { perfmon } for pid=4387 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.574000 audit[4387]: AVC avc: denied { perfmon } for pid=4387 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.574000 audit[4387]: AVC avc: denied { perfmon } for pid=4387 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.574000 audit[4387]: AVC avc: denied { bpf } for pid=4387 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.574000 audit[4387]: AVC avc: denied { bpf } for pid=4387 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.574000 audit: BPF prog-id=10 op=LOAD Jul 12 00:26:37.574000 audit[4387]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcfac11f8 a2=98 a3=ffffcfac11e8 items=0 ppid=4315 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:37.574000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 12 00:26:37.575000 audit: BPF prog-id=10 op=UNLOAD Jul 12 00:26:37.575000 audit[4387]: AVC avc: denied { bpf } for pid=4387 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.575000 audit[4387]: AVC avc: denied { bpf } for pid=4387 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.575000 audit[4387]: AVC avc: denied { perfmon } for pid=4387 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.575000 audit[4387]: AVC avc: denied { perfmon } for pid=4387 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.575000 audit[4387]: AVC avc: denied { perfmon } for pid=4387 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.575000 audit[4387]: AVC avc: denied { perfmon } for pid=4387 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.575000 audit[4387]: AVC avc: denied { perfmon } for pid=4387 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.575000 audit[4387]: AVC avc: denied { bpf } for pid=4387 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.575000 audit[4387]: AVC avc: denied { bpf } for pid=4387 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.575000 audit: BPF prog-id=11 op=LOAD Jul 12 00:26:37.575000 audit[4387]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcfac10a8 a2=74 a3=95 items=0 ppid=4315 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:37.575000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 12 00:26:37.575000 audit: BPF prog-id=11 op=UNLOAD Jul 12 00:26:37.575000 audit[4387]: AVC avc: denied { bpf } for pid=4387 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.575000 audit[4387]: AVC avc: denied { bpf } for pid=4387 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.575000 audit[4387]: AVC avc: denied { perfmon } for pid=4387 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.575000 audit[4387]: AVC avc: denied { perfmon } for pid=4387 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.575000 audit[4387]: AVC avc: denied { perfmon } for pid=4387 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.575000 audit[4387]: AVC avc: denied { perfmon } for pid=4387 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.575000 audit[4387]: AVC avc: denied { perfmon } for pid=4387 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.575000 audit[4387]: AVC avc: denied { bpf } for pid=4387 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.575000 audit[4387]: AVC avc: denied { bpf } for pid=4387 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.575000 audit: BPF prog-id=12 op=LOAD Jul 12 00:26:37.575000 audit[4387]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcfac10d8 a2=40 a3=ffffcfac1108 items=0 ppid=4315 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:37.575000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 12 00:26:37.575000 audit: BPF prog-id=12 op=UNLOAD Jul 12 00:26:37.575000 audit[4387]: AVC avc: denied { perfmon } for pid=4387 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.575000 audit[4387]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=3 a0=0 a1=ffffcfac11f0 a2=50 a3=0 items=0 ppid=4315 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:37.575000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 12 00:26:37.592000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.592000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.592000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.592000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.592000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.592000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.592000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.592000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.592000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.592000 audit: BPF prog-id=13 op=LOAD Jul 12 00:26:37.592000 audit[4388]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff4fb8b28 a2=98 a3=fffff4fb8b18 items=0 ppid=4315 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:37.592000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:26:37.594000 audit: BPF prog-id=13 op=UNLOAD Jul 12 00:26:37.595000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.595000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.595000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.595000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.595000 audit[4388]: AVC avc: denied { perfmon } 
for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.595000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.595000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.595000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.595000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.595000 audit: BPF prog-id=14 op=LOAD Jul 12 00:26:37.595000 audit[4388]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff4fb87b8 a2=74 a3=95 items=0 ppid=4315 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:37.595000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:26:37.596000 audit: BPF prog-id=14 op=UNLOAD Jul 12 00:26:37.596000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.596000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.596000 
audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.596000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.596000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.596000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.596000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.596000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.596000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.596000 audit: BPF prog-id=15 op=LOAD Jul 12 00:26:37.596000 audit[4388]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff4fb8818 a2=94 a3=2 items=0 ppid=4315 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:37.596000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:26:37.596000 audit: BPF 
prog-id=15 op=UNLOAD Jul 12 00:26:37.877937 systemd-networkd[1586]: cali348c4eadb9a: Gained IPv6LL Jul 12 00:26:37.888000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.888000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.888000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.888000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.888000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.888000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.888000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.888000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.888000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Jul 12 00:26:37.888000 audit: BPF prog-id=16 op=LOAD Jul 12 00:26:37.888000 audit[4388]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff4fb87d8 a2=40 a3=fffff4fb8808 items=0 ppid=4315 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:37.888000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:26:37.888000 audit: BPF prog-id=16 op=UNLOAD Jul 12 00:26:37.888000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.888000 audit[4388]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=fffff4fb88f0 a2=50 a3=0 items=0 ppid=4315 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:37.888000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:26:37.909000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.909000 audit[4388]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff4fb8848 a2=28 a3=fffff4fb8978 items=0 ppid=4315 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:37.909000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:26:37.909000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.909000 audit[4388]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff4fb8878 a2=28 a3=fffff4fb89a8 items=0 ppid=4315 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:37.909000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:26:37.909000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.909000 audit[4388]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff4fb8728 a2=28 a3=fffff4fb8858 items=0 ppid=4315 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:37.909000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:26:37.909000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.909000 audit[4388]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff4fb8898 a2=28 a3=fffff4fb89c8 items=0 ppid=4315 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:37.909000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:26:37.909000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.909000 audit[4388]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff4fb8878 a2=28 a3=fffff4fb89a8 items=0 ppid=4315 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:37.909000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:26:37.910000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.910000 audit[4388]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff4fb8868 a2=28 a3=fffff4fb8998 items=0 ppid=4315 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:37.910000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:26:37.910000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.910000 audit[4388]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff4fb8898 a2=28 a3=fffff4fb89c8 items=0 ppid=4315 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:37.910000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:26:37.910000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Jul 12 00:26:37.910000 audit[4388]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff4fb8878 a2=28 a3=fffff4fb89a8 items=0 ppid=4315 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:37.910000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:26:37.910000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.910000 audit[4388]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff4fb8898 a2=28 a3=fffff4fb89c8 items=0 ppid=4315 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:37.910000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:26:37.910000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.910000 audit[4388]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff4fb8868 a2=28 a3=fffff4fb8998 items=0 ppid=4315 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:37.910000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:26:37.910000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 
00:26:37.910000 audit[4388]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff4fb88e8 a2=28 a3=fffff4fb8a28 items=0 ppid=4315 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:37.910000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:26:37.910000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.910000 audit[4388]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=fffff4fb8620 a2=50 a3=0 items=0 ppid=4315 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:37.910000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:26:37.910000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.910000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.910000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.910000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.910000 audit[4388]: AVC avc: denied { perfmon } for 
pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.910000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.910000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.910000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.910000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.910000 audit: BPF prog-id=17 op=LOAD Jul 12 00:26:37.910000 audit[4388]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff4fb8628 a2=94 a3=5 items=0 ppid=4315 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:37.910000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:26:37.910000 audit: BPF prog-id=17 op=UNLOAD Jul 12 00:26:37.910000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.910000 audit[4388]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=fffff4fb8730 a2=50 a3=0 items=0 ppid=4315 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:37.910000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:26:37.910000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.910000 audit[4388]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=fffff4fb8878 a2=4 a3=3 items=0 ppid=4315 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:37.910000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:26:37.910000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.910000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.910000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.910000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.910000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.910000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.910000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.910000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.910000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.910000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.910000 audit[4388]: AVC avc: denied { confidentiality } for pid=4388 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 12 00:26:37.910000 audit[4388]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffff4fb8858 a2=94 a3=6 items=0 ppid=4315 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:37.910000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:26:37.911000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.911000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.911000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.911000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.911000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.911000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.911000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.911000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.911000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.911000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.911000 audit[4388]: AVC avc: denied { confidentiality } for pid=4388 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 12 00:26:37.911000 audit[4388]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffff4fb8028 a2=94 a3=83 items=0 ppid=4315 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:37.911000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:26:37.911000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.911000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.911000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.911000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.911000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.911000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.911000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.911000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.911000 audit[4388]: AVC avc: denied { perfmon } for pid=4388 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.911000 audit[4388]: AVC avc: denied { bpf } for pid=4388 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.911000 audit[4388]: AVC avc: denied { confidentiality } for pid=4388 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 12 00:26:37.911000 audit[4388]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffff4fb8028 a2=94 a3=83 items=0 ppid=4315 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:37.911000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:26:37.937000 audit[4413]: AVC avc: denied { bpf } for pid=4413 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.937000 audit[4413]: AVC avc: denied { bpf } for pid=4413 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.937000 audit[4413]: AVC avc: denied { perfmon } for pid=4413 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.937000 audit[4413]: AVC avc: denied { perfmon } for pid=4413 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.937000 audit[4413]: AVC avc: denied { perfmon } for pid=4413 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.937000 audit[4413]: AVC avc: denied { perfmon } for pid=4413 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.937000 audit[4413]: AVC avc: denied { perfmon } for pid=4413 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.937000 audit[4413]: AVC avc: denied { bpf } for pid=4413 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.937000 audit[4413]: AVC avc: denied { bpf } for pid=4413 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.937000 audit: BPF prog-id=18 op=LOAD Jul 12 00:26:37.937000 audit[4413]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff2a332f8 a2=98 a3=fffff2a332e8 items=0 ppid=4315 pid=4413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:37.937000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 12 00:26:37.939000 audit: BPF prog-id=18 op=UNLOAD Jul 12 00:26:37.939000 audit[4413]: AVC avc: denied { bpf } for pid=4413 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.939000 audit[4413]: AVC avc: denied { bpf } for pid=4413 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.939000 audit[4413]: AVC avc: denied { perfmon } for pid=4413 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.939000 audit[4413]: AVC avc: denied { perfmon } for pid=4413 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.939000 audit[4413]: AVC avc: denied { perfmon } for pid=4413 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.939000 audit[4413]: AVC avc: denied { perfmon } for pid=4413 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.939000 audit[4413]: AVC avc: denied { perfmon } for pid=4413 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.939000 audit[4413]: AVC avc: denied { bpf } for pid=4413 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.939000 audit[4413]: AVC avc: denied { bpf } for pid=4413 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.939000 audit: BPF prog-id=19 op=LOAD Jul 12 00:26:37.939000 audit[4413]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff2a331a8 a2=74 a3=95 items=0 ppid=4315 pid=4413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:37.939000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 12 00:26:37.941000 audit: BPF prog-id=19 op=UNLOAD Jul 12 00:26:37.941000 audit[4413]: AVC avc: denied { bpf } for pid=4413 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.941000 audit[4413]: AVC avc: denied { bpf } for pid=4413 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.941000 audit[4413]: AVC avc: denied { perfmon } for pid=4413 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.941000 audit[4413]: AVC avc: denied { perfmon } for pid=4413 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.941000 audit[4413]: AVC avc: denied { perfmon } for pid=4413 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.941000 audit[4413]: AVC avc: denied { perfmon } for pid=4413 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.941000 audit[4413]: AVC avc: denied { perfmon } for pid=4413 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.941000 audit[4413]: AVC avc: denied { bpf } for pid=4413 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.941000 audit[4413]: AVC avc: denied { bpf } for pid=4413 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:37.941000 audit: BPF prog-id=20 op=LOAD Jul 12 00:26:37.941000 audit[4413]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff2a331d8 a2=40 a3=fffff2a33208 items=0 ppid=4315 pid=4413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:37.941000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 12 00:26:37.942000 audit: BPF prog-id=20 op=UNLOAD Jul 12 00:26:38.085538 systemd-networkd[1586]: vxlan.calico: Link UP Jul 12 00:26:38.085551 systemd-networkd[1586]: vxlan.calico: Gained carrier Jul 12 00:26:38.106087 env[1913]: time="2025-07-12T00:26:38.106028409Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:38.115650 env[1913]: time="2025-07-12T00:26:38.115530467Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:38.123254 env[1913]: time="2025-07-12T00:26:38.123145546Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:38.127779 env[1913]: time="2025-07-12T00:26:38.127699243Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:38.128459 env[1913]: time="2025-07-12T00:26:38.128383833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 12 00:26:38.137333 env[1913]: time="2025-07-12T00:26:38.137163860Z" level=info msg="CreateContainer within sandbox \"a62887f4ece0f5eddffbe23cddd5512151b7320ba36a356f9be735564ddc8fbc\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 12 00:26:38.157296 (udev-worker)[4151]: Network interface NamePolicy= disabled on kernel command line. 
Jul 12 00:26:38.160000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.160000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.160000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.160000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.160000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.160000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.160000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.160000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.160000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.160000 audit: BPF prog-id=21 op=LOAD Jul 12 
00:26:38.160000 audit[4439]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffce02218 a2=98 a3=fffffce02208 items=0 ppid=4315 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.160000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:26:38.160000 audit: BPF prog-id=21 op=UNLOAD Jul 12 00:26:38.160000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.160000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.160000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.160000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.160000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.160000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.160000 audit[4439]: AVC avc: denied { perfmon } 
for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.160000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.160000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.160000 audit: BPF prog-id=22 op=LOAD Jul 12 00:26:38.160000 audit[4439]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffce01ef8 a2=74 a3=95 items=0 ppid=4315 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.160000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:26:38.161000 audit: BPF prog-id=22 op=UNLOAD Jul 12 00:26:38.161000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.161000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.161000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.161000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.161000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.161000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.161000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.161000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.161000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.161000 audit: BPF prog-id=23 op=LOAD Jul 12 00:26:38.161000 audit[4439]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffce01f58 a2=94 a3=2 items=0 ppid=4315 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.161000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:26:38.163000 audit: BPF prog-id=23 op=UNLOAD Jul 12 00:26:38.163000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.163000 audit[4439]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffffce01f88 a2=28 a3=fffffce020b8 items=0 ppid=4315 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.163000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:26:38.163000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.163000 audit[4439]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffce01fb8 a2=28 a3=fffffce020e8 items=0 ppid=4315 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.163000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:26:38.163000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.163000 audit[4439]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffce01e68 a2=28 a3=fffffce01f98 items=0 ppid=4315 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.163000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:26:38.163000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.163000 audit[4439]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffffce01fd8 a2=28 a3=fffffce02108 items=0 ppid=4315 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.163000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:26:38.163000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.163000 audit[4439]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffffce01fb8 a2=28 a3=fffffce020e8 items=0 ppid=4315 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.163000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 
00:26:38.163000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.163000 audit[4439]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffffce01fa8 a2=28 a3=fffffce020d8 items=0 ppid=4315 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.163000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:26:38.163000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.163000 audit[4439]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffffce01fd8 a2=28 a3=fffffce02108 items=0 ppid=4315 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.163000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:26:38.163000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.163000 audit[4439]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffce01fb8 a2=28 a3=fffffce020e8 items=0 ppid=4315 pid=4439 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.163000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:26:38.178000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.178000 audit[4439]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffce01fd8 a2=28 a3=fffffce02108 items=0 ppid=4315 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.178000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:26:38.178000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.178000 audit[4439]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffce01fa8 a2=28 a3=fffffce020d8 items=0 ppid=4315 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.178000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:26:38.178000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.178000 audit[4439]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffffce02028 a2=28 a3=fffffce02168 items=0 ppid=4315 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.178000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:26:38.178000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.178000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.178000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.178000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.178000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.178000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.178000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.178000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.178000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.178000 audit: BPF prog-id=24 op=LOAD Jul 12 00:26:38.178000 audit[4439]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffce01e48 a2=40 a3=fffffce01e78 items=0 ppid=4315 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.178000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:26:38.178000 audit: BPF prog-id=24 op=UNLOAD Jul 12 00:26:38.183000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.183000 audit[4439]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=0 a1=fffffce01e70 
a2=50 a3=0 items=0 ppid=4315 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.183000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:26:38.188000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.188000 audit[4439]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=0 a1=fffffce01e70 a2=50 a3=0 items=0 ppid=4315 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.188000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:26:38.188000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.188000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.188000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.188000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.188000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.188000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.188000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.188000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.188000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.188000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.188000 audit: BPF prog-id=25 op=LOAD Jul 12 00:26:38.188000 audit[4439]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffce015d8 a2=94 a3=2 items=0 ppid=4315 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.188000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:26:38.188000 audit: BPF prog-id=25 op=UNLOAD Jul 12 00:26:38.188000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.188000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.188000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.188000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.188000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.188000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.188000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.188000 audit[4439]: AVC avc: denied { perfmon } for pid=4439 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 
00:26:38.188000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.188000 audit[4439]: AVC avc: denied { bpf } for pid=4439 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.188000 audit: BPF prog-id=26 op=LOAD Jul 12 00:26:38.188000 audit[4439]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffce01768 a2=94 a3=30 items=0 ppid=4315 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.188000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:26:38.213495 env[1913]: time="2025-07-12T00:26:38.178488142Z" level=info msg="CreateContainer within sandbox \"a62887f4ece0f5eddffbe23cddd5512151b7320ba36a356f9be735564ddc8fbc\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"f2c3ff1ceef062116acbd89a0abd472d840ee1ccdd8bd9de809a7571a9228803\"" Jul 12 00:26:38.213495 env[1913]: time="2025-07-12T00:26:38.179857118Z" level=info msg="StartContainer for \"f2c3ff1ceef062116acbd89a0abd472d840ee1ccdd8bd9de809a7571a9228803\"" Jul 12 00:26:38.213000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.213000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 
00:26:38.213000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.213000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.213000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.213000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.213000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.213000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.213000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.213000 audit: BPF prog-id=27 op=LOAD Jul 12 00:26:38.213000 audit[4448]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcfd556c8 a2=98 a3=ffffcfd556b8 items=0 ppid=4315 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.213000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:26:38.215000 audit: BPF prog-id=27 op=UNLOAD Jul 12 00:26:38.216000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.216000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.216000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.216000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.216000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.216000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.216000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.216000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.216000 audit[4448]: AVC 
avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.216000 audit: BPF prog-id=28 op=LOAD Jul 12 00:26:38.216000 audit[4448]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcfd55358 a2=74 a3=95 items=0 ppid=4315 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.216000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:26:38.216000 audit: BPF prog-id=28 op=UNLOAD Jul 12 00:26:38.216000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.216000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.216000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.216000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.216000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.216000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.216000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.216000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.216000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.216000 audit: BPF prog-id=29 op=LOAD Jul 12 00:26:38.216000 audit[4448]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcfd553b8 a2=94 a3=2 items=0 ppid=4315 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.216000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:26:38.217000 audit: BPF prog-id=29 op=UNLOAD Jul 12 00:26:38.387522 env[1913]: time="2025-07-12T00:26:38.387373519Z" level=info msg="StopPodSandbox for \"df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1\"" Jul 12 00:26:38.388287 env[1913]: time="2025-07-12T00:26:38.388243453Z" level=info msg="StopPodSandbox for \"70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f\"" Jul 12 00:26:38.394381 env[1913]: time="2025-07-12T00:26:38.388826497Z" level=info msg="StopPodSandbox for \"f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04\"" Jul 12 00:26:38.409271 
env[1913]: time="2025-07-12T00:26:38.408508014Z" level=info msg="StartContainer for \"f2c3ff1ceef062116acbd89a0abd472d840ee1ccdd8bd9de809a7571a9228803\" returns successfully" Jul 12 00:26:38.412590 env[1913]: time="2025-07-12T00:26:38.412526728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 12 00:26:38.506000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.506000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.506000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.506000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.506000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.506000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.506000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.506000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.506000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.506000 audit: BPF prog-id=30 op=LOAD Jul 12 00:26:38.506000 audit[4448]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcfd55378 a2=40 a3=ffffcfd553a8 items=0 ppid=4315 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.506000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:26:38.508000 audit: BPF prog-id=30 op=UNLOAD Jul 12 00:26:38.508000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.508000 audit[4448]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffcfd55490 a2=50 a3=0 items=0 ppid=4315 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.508000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:26:38.603000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.603000 audit[4448]: SYSCALL 
arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcfd553e8 a2=28 a3=ffffcfd55518 items=0 ppid=4315 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.603000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:26:38.603000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.603000 audit[4448]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcfd55418 a2=28 a3=ffffcfd55548 items=0 ppid=4315 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.603000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:26:38.603000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.603000 audit[4448]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcfd552c8 a2=28 a3=ffffcfd553f8 items=0 ppid=4315 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.603000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:26:38.603000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.603000 audit[4448]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcfd55438 a2=28 a3=ffffcfd55568 items=0 ppid=4315 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.603000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:26:38.603000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.603000 audit[4448]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcfd55418 a2=28 a3=ffffcfd55548 items=0 ppid=4315 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.603000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:26:38.603000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.603000 
audit[4448]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcfd55408 a2=28 a3=ffffcfd55538 items=0 ppid=4315 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.603000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:26:38.603000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.603000 audit[4448]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcfd55438 a2=28 a3=ffffcfd55568 items=0 ppid=4315 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.603000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:26:38.604000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.604000 audit[4448]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcfd55418 a2=28 a3=ffffcfd55548 items=0 ppid=4315 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.604000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:26:38.604000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.604000 audit[4448]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcfd55438 a2=28 a3=ffffcfd55568 items=0 ppid=4315 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.604000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:26:38.604000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.604000 audit[4448]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcfd55408 a2=28 a3=ffffcfd55538 items=0 ppid=4315 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.604000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:26:38.604000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.604000 
audit[4448]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcfd55488 a2=28 a3=ffffcfd555c8 items=0 ppid=4315 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.604000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:26:38.604000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.604000 audit[4448]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffcfd551c0 a2=50 a3=0 items=0 ppid=4315 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.604000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:26:38.604000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.604000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.604000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.604000 audit[4448]: 
AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.604000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.604000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.604000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.604000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.604000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.604000 audit: BPF prog-id=31 op=LOAD Jul 12 00:26:38.604000 audit[4448]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcfd551c8 a2=94 a3=5 items=0 ppid=4315 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.604000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:26:38.604000 audit: BPF prog-id=31 op=UNLOAD Jul 12 00:26:38.604000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.604000 audit[4448]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffcfd552d0 a2=50 a3=0 items=0 ppid=4315 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.604000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:26:38.604000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.604000 audit[4448]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffcfd55418 a2=4 a3=3 items=0 ppid=4315 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.604000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:26:38.604000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.604000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.604000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.604000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.604000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.604000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.604000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.604000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.604000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.604000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.604000 audit[4448]: AVC avc: denied { confidentiality } for pid=4448 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 12 00:26:38.604000 audit[4448]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffcfd553f8 a2=94 a3=6 items=0 
ppid=4315 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.604000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:26:38.607000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.607000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.607000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.607000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.607000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.607000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.607000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.607000 audit[4448]: AVC avc: denied { perfmon } for 
pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.607000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.607000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.607000 audit[4448]: AVC avc: denied { confidentiality } for pid=4448 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 12 00:26:38.607000 audit[4448]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffcfd54bc8 a2=94 a3=83 items=0 ppid=4315 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.607000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:26:38.607000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.607000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.607000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Jul 12 00:26:38.607000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.607000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.607000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.607000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.607000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.607000 audit[4448]: AVC avc: denied { perfmon } for pid=4448 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.607000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.607000 audit[4448]: AVC avc: denied { confidentiality } for pid=4448 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 12 00:26:38.607000 audit[4448]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffcfd54bc8 a2=94 a3=83 items=0 ppid=4315 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.607000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:26:38.608000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.608000 audit[4448]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffcfd56608 a2=10 a3=ffffcfd566f8 items=0 ppid=4315 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.608000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:26:38.609000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.609000 audit[4448]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffcfd564c8 a2=10 a3=ffffcfd565b8 items=0 ppid=4315 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.609000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:26:38.609000 audit[4448]: AVC avc: denied { bpf } for pid=4448 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.609000 audit[4448]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffcfd56438 a2=10 a3=ffffcfd565b8 items=0 ppid=4315 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.609000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:26:38.609000 audit[4448]: AVC avc: denied { bpf } for pid=4448 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:26:38.609000 audit[4448]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffcfd56438 a2=10 a3=ffffcfd565b8 items=0 ppid=4315 pid=4448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:38.609000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:26:38.644000 audit: BPF prog-id=26 op=UNLOAD Jul 12 00:26:38.882876 env[1913]: 2025-07-12 00:26:38.700 [INFO][4518] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" Jul 12 00:26:38.882876 env[1913]: 2025-07-12 00:26:38.700 [INFO][4518] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" iface="eth0" netns="/var/run/netns/cni-b2c90df2-f40a-84ed-dc79-7b4e29954bbc" Jul 12 00:26:38.882876 env[1913]: 2025-07-12 00:26:38.701 [INFO][4518] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" iface="eth0" netns="/var/run/netns/cni-b2c90df2-f40a-84ed-dc79-7b4e29954bbc" Jul 12 00:26:38.882876 env[1913]: 2025-07-12 00:26:38.704 [INFO][4518] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" iface="eth0" netns="/var/run/netns/cni-b2c90df2-f40a-84ed-dc79-7b4e29954bbc" Jul 12 00:26:38.882876 env[1913]: 2025-07-12 00:26:38.704 [INFO][4518] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" Jul 12 00:26:38.882876 env[1913]: 2025-07-12 00:26:38.704 [INFO][4518] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" Jul 12 00:26:38.882876 env[1913]: 2025-07-12 00:26:38.810 [INFO][4542] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" HandleID="k8s-pod-network.70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" Workload="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--msgjt-eth0" Jul 12 00:26:38.882876 env[1913]: 2025-07-12 00:26:38.813 [INFO][4542] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:26:38.882876 env[1913]: 2025-07-12 00:26:38.814 [INFO][4542] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:26:38.882876 env[1913]: 2025-07-12 00:26:38.863 [WARNING][4542] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" HandleID="k8s-pod-network.70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" Workload="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--msgjt-eth0" Jul 12 00:26:38.882876 env[1913]: 2025-07-12 00:26:38.863 [INFO][4542] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" HandleID="k8s-pod-network.70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" Workload="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--msgjt-eth0" Jul 12 00:26:38.882876 env[1913]: 2025-07-12 00:26:38.870 [INFO][4542] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:26:38.882876 env[1913]: 2025-07-12 00:26:38.874 [INFO][4518] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" Jul 12 00:26:38.891924 systemd[1]: run-netns-cni\x2db2c90df2\x2df40a\x2d84ed\x2ddc79\x2d7b4e29954bbc.mount: Deactivated successfully. 
Jul 12 00:26:38.895409 env[1913]: time="2025-07-12T00:26:38.895338208Z" level=info msg="TearDown network for sandbox \"70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f\" successfully" Jul 12 00:26:38.895640 env[1913]: time="2025-07-12T00:26:38.895573473Z" level=info msg="StopPodSandbox for \"70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f\" returns successfully" Jul 12 00:26:38.902676 env[1913]: time="2025-07-12T00:26:38.902615792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-msgjt,Uid:b25db915-a031-4109-9564-cc0834ce0083,Namespace:kube-system,Attempt:1,}" Jul 12 00:26:38.959851 env[1913]: 2025-07-12 00:26:38.655 [INFO][4510] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" Jul 12 00:26:38.959851 env[1913]: 2025-07-12 00:26:38.655 [INFO][4510] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" iface="eth0" netns="/var/run/netns/cni-170538a4-4149-9f1c-ebc7-955baa69a18c" Jul 12 00:26:38.959851 env[1913]: 2025-07-12 00:26:38.656 [INFO][4510] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" iface="eth0" netns="/var/run/netns/cni-170538a4-4149-9f1c-ebc7-955baa69a18c" Jul 12 00:26:38.959851 env[1913]: 2025-07-12 00:26:38.658 [INFO][4510] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" iface="eth0" netns="/var/run/netns/cni-170538a4-4149-9f1c-ebc7-955baa69a18c" Jul 12 00:26:38.959851 env[1913]: 2025-07-12 00:26:38.658 [INFO][4510] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" Jul 12 00:26:38.959851 env[1913]: 2025-07-12 00:26:38.658 [INFO][4510] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" Jul 12 00:26:38.959851 env[1913]: 2025-07-12 00:26:38.845 [INFO][4533] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" HandleID="k8s-pod-network.f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" Workload="ip--172--31--29--120-k8s-csi--node--driver--g7wxf-eth0" Jul 12 00:26:38.959851 env[1913]: 2025-07-12 00:26:38.846 [INFO][4533] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:26:38.959851 env[1913]: 2025-07-12 00:26:38.906 [INFO][4533] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:26:38.959851 env[1913]: 2025-07-12 00:26:38.928 [WARNING][4533] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" HandleID="k8s-pod-network.f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" Workload="ip--172--31--29--120-k8s-csi--node--driver--g7wxf-eth0" Jul 12 00:26:38.959851 env[1913]: 2025-07-12 00:26:38.929 [INFO][4533] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" HandleID="k8s-pod-network.f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" Workload="ip--172--31--29--120-k8s-csi--node--driver--g7wxf-eth0" Jul 12 00:26:38.959851 env[1913]: 2025-07-12 00:26:38.933 [INFO][4533] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:26:38.959851 env[1913]: 2025-07-12 00:26:38.936 [INFO][4510] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" Jul 12 00:26:38.964031 env[1913]: time="2025-07-12T00:26:38.963950986Z" level=info msg="TearDown network for sandbox \"f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04\" successfully" Jul 12 00:26:38.964031 env[1913]: time="2025-07-12T00:26:38.964019844Z" level=info msg="StopPodSandbox for \"f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04\" returns successfully" Jul 12 00:26:38.965539 env[1913]: time="2025-07-12T00:26:38.965486261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g7wxf,Uid:355545c7-e2b3-4e21-bab3-2e3ea1245fce,Namespace:calico-system,Attempt:1,}" Jul 12 00:26:39.065592 env[1913]: 2025-07-12 00:26:38.698 [INFO][4502] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" Jul 12 00:26:39.065592 env[1913]: 2025-07-12 00:26:38.702 [INFO][4502] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" iface="eth0" netns="/var/run/netns/cni-824d3e7b-2f97-3486-04fb-110e8bace1dd" Jul 12 00:26:39.065592 env[1913]: 2025-07-12 00:26:38.703 [INFO][4502] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" iface="eth0" netns="/var/run/netns/cni-824d3e7b-2f97-3486-04fb-110e8bace1dd" Jul 12 00:26:39.065592 env[1913]: 2025-07-12 00:26:38.704 [INFO][4502] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" iface="eth0" netns="/var/run/netns/cni-824d3e7b-2f97-3486-04fb-110e8bace1dd" Jul 12 00:26:39.065592 env[1913]: 2025-07-12 00:26:38.704 [INFO][4502] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" Jul 12 00:26:39.065592 env[1913]: 2025-07-12 00:26:38.704 [INFO][4502] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" Jul 12 00:26:39.065592 env[1913]: 2025-07-12 00:26:38.931 [INFO][4543] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" HandleID="k8s-pod-network.df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" Workload="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--6g88r-eth0" Jul 12 00:26:39.065592 env[1913]: 2025-07-12 00:26:38.932 [INFO][4543] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:26:39.065592 env[1913]: 2025-07-12 00:26:38.954 [INFO][4543] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:26:39.065592 env[1913]: 2025-07-12 00:26:39.005 [WARNING][4543] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" HandleID="k8s-pod-network.df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" Workload="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--6g88r-eth0" Jul 12 00:26:39.065592 env[1913]: 2025-07-12 00:26:39.005 [INFO][4543] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" HandleID="k8s-pod-network.df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" Workload="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--6g88r-eth0" Jul 12 00:26:39.065592 env[1913]: 2025-07-12 00:26:39.013 [INFO][4543] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:26:39.065592 env[1913]: 2025-07-12 00:26:39.049 [INFO][4502] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" Jul 12 00:26:39.068502 env[1913]: time="2025-07-12T00:26:39.068429017Z" level=info msg="TearDown network for sandbox \"df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1\" successfully" Jul 12 00:26:39.068755 env[1913]: time="2025-07-12T00:26:39.068713639Z" level=info msg="StopPodSandbox for \"df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1\" returns successfully" Jul 12 00:26:39.070116 env[1913]: time="2025-07-12T00:26:39.070059358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6g88r,Uid:08c64d92-9452-47f2-8a8c-8837e4813c7d,Namespace:kube-system,Attempt:1,}" Jul 12 00:26:39.136000 audit[4594]: NETFILTER_CFG table=nat:105 family=2 entries=15 op=nft_register_chain pid=4594 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 12 00:26:39.136000 audit[4594]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffc57a5db0 a2=0 a3=ffff85440fa8 items=0 ppid=4315 pid=4594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:39.136000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 12 00:26:39.149000 audit[4596]: NETFILTER_CFG table=mangle:106 family=2 entries=16 op=nft_register_chain pid=4596 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 12 00:26:39.149000 audit[4596]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffdf13b3d0 a2=0 a3=ffffbd587fa8 items=0 ppid=4315 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:39.149000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 12 00:26:39.193879 systemd[1]: run-netns-cni\x2d170538a4\x2d4149\x2d9f1c\x2debc7\x2d955baa69a18c.mount: Deactivated successfully. Jul 12 00:26:39.194165 systemd[1]: run-netns-cni\x2d824d3e7b\x2d2f97\x2d3486\x2d04fb\x2d110e8bace1dd.mount: Deactivated successfully. 
Jul 12 00:26:39.214000 audit[4590]: NETFILTER_CFG table=raw:107 family=2 entries=21 op=nft_register_chain pid=4590 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 12 00:26:39.214000 audit[4590]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffdb235930 a2=0 a3=ffff85f73fa8 items=0 ppid=4315 pid=4590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:39.214000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 12 00:26:39.260000 audit[4610]: NETFILTER_CFG table=filter:108 family=2 entries=94 op=nft_register_chain pid=4610 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 12 00:26:39.260000 audit[4610]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffeaa98790 a2=0 a3=ffff839e8fa8 items=0 ppid=4315 pid=4610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:39.260000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 12 00:26:39.574932 (udev-worker)[4441]: Network interface NamePolicy= disabled on kernel command line. 
Jul 12 00:26:39.577788 systemd-networkd[1586]: calic47b68d46fc: Link UP Jul 12 00:26:39.584717 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 12 00:26:39.585499 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic47b68d46fc: link becomes ready Jul 12 00:26:39.585181 systemd-networkd[1586]: calic47b68d46fc: Gained carrier Jul 12 00:26:39.648294 env[1913]: 2025-07-12 00:26:39.223 [INFO][4568] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--120-k8s-coredns--7c65d6cfc9--msgjt-eth0 coredns-7c65d6cfc9- kube-system b25db915-a031-4109-9564-cc0834ce0083 932 0 2025-07-12 00:25:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-120 coredns-7c65d6cfc9-msgjt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic47b68d46fc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="55d5be9e0599d4521e0853d793265afe3c6fd026f10dabfb9d1a1ed7b688f7be" Namespace="kube-system" Pod="coredns-7c65d6cfc9-msgjt" WorkloadEndpoint="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--msgjt-" Jul 12 00:26:39.648294 env[1913]: 2025-07-12 00:26:39.228 [INFO][4568] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="55d5be9e0599d4521e0853d793265afe3c6fd026f10dabfb9d1a1ed7b688f7be" Namespace="kube-system" Pod="coredns-7c65d6cfc9-msgjt" WorkloadEndpoint="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--msgjt-eth0" Jul 12 00:26:39.648294 env[1913]: 2025-07-12 00:26:39.433 [INFO][4618] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="55d5be9e0599d4521e0853d793265afe3c6fd026f10dabfb9d1a1ed7b688f7be" HandleID="k8s-pod-network.55d5be9e0599d4521e0853d793265afe3c6fd026f10dabfb9d1a1ed7b688f7be" Workload="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--msgjt-eth0" Jul 12 00:26:39.648294 
env[1913]: 2025-07-12 00:26:39.433 [INFO][4618] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="55d5be9e0599d4521e0853d793265afe3c6fd026f10dabfb9d1a1ed7b688f7be" HandleID="k8s-pod-network.55d5be9e0599d4521e0853d793265afe3c6fd026f10dabfb9d1a1ed7b688f7be" Workload="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--msgjt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b290), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-120", "pod":"coredns-7c65d6cfc9-msgjt", "timestamp":"2025-07-12 00:26:39.433506468 +0000 UTC"}, Hostname:"ip-172-31-29-120", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:26:39.648294 env[1913]: 2025-07-12 00:26:39.433 [INFO][4618] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:26:39.648294 env[1913]: 2025-07-12 00:26:39.434 [INFO][4618] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:26:39.648294 env[1913]: 2025-07-12 00:26:39.434 [INFO][4618] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-120' Jul 12 00:26:39.648294 env[1913]: 2025-07-12 00:26:39.454 [INFO][4618] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.55d5be9e0599d4521e0853d793265afe3c6fd026f10dabfb9d1a1ed7b688f7be" host="ip-172-31-29-120" Jul 12 00:26:39.648294 env[1913]: 2025-07-12 00:26:39.480 [INFO][4618] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-120" Jul 12 00:26:39.648294 env[1913]: 2025-07-12 00:26:39.487 [INFO][4618] ipam/ipam.go 511: Trying affinity for 192.168.107.192/26 host="ip-172-31-29-120" Jul 12 00:26:39.648294 env[1913]: 2025-07-12 00:26:39.490 [INFO][4618] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.192/26 host="ip-172-31-29-120" Jul 12 00:26:39.648294 env[1913]: 2025-07-12 00:26:39.513 [INFO][4618] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.192/26 host="ip-172-31-29-120" Jul 12 00:26:39.648294 env[1913]: 2025-07-12 00:26:39.513 [INFO][4618] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.192/26 handle="k8s-pod-network.55d5be9e0599d4521e0853d793265afe3c6fd026f10dabfb9d1a1ed7b688f7be" host="ip-172-31-29-120" Jul 12 00:26:39.648294 env[1913]: 2025-07-12 00:26:39.519 [INFO][4618] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.55d5be9e0599d4521e0853d793265afe3c6fd026f10dabfb9d1a1ed7b688f7be Jul 12 00:26:39.648294 env[1913]: 2025-07-12 00:26:39.543 [INFO][4618] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.192/26 handle="k8s-pod-network.55d5be9e0599d4521e0853d793265afe3c6fd026f10dabfb9d1a1ed7b688f7be" host="ip-172-31-29-120" Jul 12 00:26:39.648294 env[1913]: 2025-07-12 00:26:39.558 [INFO][4618] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.107.194/26] block=192.168.107.192/26 
handle="k8s-pod-network.55d5be9e0599d4521e0853d793265afe3c6fd026f10dabfb9d1a1ed7b688f7be" host="ip-172-31-29-120" Jul 12 00:26:39.648294 env[1913]: 2025-07-12 00:26:39.558 [INFO][4618] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.194/26] handle="k8s-pod-network.55d5be9e0599d4521e0853d793265afe3c6fd026f10dabfb9d1a1ed7b688f7be" host="ip-172-31-29-120" Jul 12 00:26:39.648294 env[1913]: 2025-07-12 00:26:39.558 [INFO][4618] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:26:39.648294 env[1913]: 2025-07-12 00:26:39.558 [INFO][4618] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.194/26] IPv6=[] ContainerID="55d5be9e0599d4521e0853d793265afe3c6fd026f10dabfb9d1a1ed7b688f7be" HandleID="k8s-pod-network.55d5be9e0599d4521e0853d793265afe3c6fd026f10dabfb9d1a1ed7b688f7be" Workload="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--msgjt-eth0" Jul 12 00:26:39.652118 env[1913]: 2025-07-12 00:26:39.570 [INFO][4568] cni-plugin/k8s.go 418: Populated endpoint ContainerID="55d5be9e0599d4521e0853d793265afe3c6fd026f10dabfb9d1a1ed7b688f7be" Namespace="kube-system" Pod="coredns-7c65d6cfc9-msgjt" WorkloadEndpoint="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--msgjt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--120-k8s-coredns--7c65d6cfc9--msgjt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b25db915-a031-4109-9564-cc0834ce0083", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 25, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-120", ContainerID:"", Pod:"coredns-7c65d6cfc9-msgjt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic47b68d46fc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:26:39.652118 env[1913]: 2025-07-12 00:26:39.570 [INFO][4568] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.194/32] ContainerID="55d5be9e0599d4521e0853d793265afe3c6fd026f10dabfb9d1a1ed7b688f7be" Namespace="kube-system" Pod="coredns-7c65d6cfc9-msgjt" WorkloadEndpoint="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--msgjt-eth0" Jul 12 00:26:39.652118 env[1913]: 2025-07-12 00:26:39.570 [INFO][4568] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic47b68d46fc ContainerID="55d5be9e0599d4521e0853d793265afe3c6fd026f10dabfb9d1a1ed7b688f7be" Namespace="kube-system" Pod="coredns-7c65d6cfc9-msgjt" WorkloadEndpoint="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--msgjt-eth0" Jul 12 00:26:39.652118 env[1913]: 2025-07-12 00:26:39.599 [INFO][4568] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="55d5be9e0599d4521e0853d793265afe3c6fd026f10dabfb9d1a1ed7b688f7be" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-msgjt" WorkloadEndpoint="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--msgjt-eth0" Jul 12 00:26:39.652118 env[1913]: 2025-07-12 00:26:39.601 [INFO][4568] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="55d5be9e0599d4521e0853d793265afe3c6fd026f10dabfb9d1a1ed7b688f7be" Namespace="kube-system" Pod="coredns-7c65d6cfc9-msgjt" WorkloadEndpoint="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--msgjt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--120-k8s-coredns--7c65d6cfc9--msgjt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b25db915-a031-4109-9564-cc0834ce0083", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 25, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-120", ContainerID:"55d5be9e0599d4521e0853d793265afe3c6fd026f10dabfb9d1a1ed7b688f7be", Pod:"coredns-7c65d6cfc9-msgjt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic47b68d46fc", MAC:"8e:92:ad:b2:03:bd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:26:39.652118 env[1913]: 2025-07-12 00:26:39.644 [INFO][4568] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="55d5be9e0599d4521e0853d793265afe3c6fd026f10dabfb9d1a1ed7b688f7be" Namespace="kube-system" Pod="coredns-7c65d6cfc9-msgjt" WorkloadEndpoint="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--msgjt-eth0" Jul 12 00:26:39.672069 systemd-networkd[1586]: vxlan.calico: Gained IPv6LL Jul 12 00:26:39.705424 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliecb817ff36d: link becomes ready Jul 12 00:26:39.704617 systemd-networkd[1586]: caliecb817ff36d: Link UP Jul 12 00:26:39.705040 systemd-networkd[1586]: caliecb817ff36d: Gained carrier Jul 12 00:26:39.713000 audit[4657]: NETFILTER_CFG table=filter:109 family=2 entries=42 op=nft_register_chain pid=4657 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 12 00:26:39.713000 audit[4657]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22552 a0=3 a1=ffffd44f4130 a2=0 a3=ffff853dafa8 items=0 ppid=4315 pid=4657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:39.713000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 12 00:26:39.777910 env[1913]: 2025-07-12 00:26:39.423 [INFO][4600] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--120-k8s-coredns--7c65d6cfc9--6g88r-eth0 
coredns-7c65d6cfc9- kube-system 08c64d92-9452-47f2-8a8c-8837e4813c7d 933 0 2025-07-12 00:25:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-120 coredns-7c65d6cfc9-6g88r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliecb817ff36d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7f610f9b872029e3d2bb2673d5e3147c30df90e05f37546689a0bb39eb45e714" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6g88r" WorkloadEndpoint="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--6g88r-" Jul 12 00:26:39.777910 env[1913]: 2025-07-12 00:26:39.423 [INFO][4600] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7f610f9b872029e3d2bb2673d5e3147c30df90e05f37546689a0bb39eb45e714" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6g88r" WorkloadEndpoint="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--6g88r-eth0" Jul 12 00:26:39.777910 env[1913]: 2025-07-12 00:26:39.539 [INFO][4631] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f610f9b872029e3d2bb2673d5e3147c30df90e05f37546689a0bb39eb45e714" HandleID="k8s-pod-network.7f610f9b872029e3d2bb2673d5e3147c30df90e05f37546689a0bb39eb45e714" Workload="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--6g88r-eth0" Jul 12 00:26:39.777910 env[1913]: 2025-07-12 00:26:39.540 [INFO][4631] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7f610f9b872029e3d2bb2673d5e3147c30df90e05f37546689a0bb39eb45e714" HandleID="k8s-pod-network.7f610f9b872029e3d2bb2673d5e3147c30df90e05f37546689a0bb39eb45e714" Workload="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--6g88r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323f40), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-120", "pod":"coredns-7c65d6cfc9-6g88r", "timestamp":"2025-07-12 00:26:39.539853878 +0000 
UTC"}, Hostname:"ip-172-31-29-120", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:26:39.777910 env[1913]: 2025-07-12 00:26:39.540 [INFO][4631] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:26:39.777910 env[1913]: 2025-07-12 00:26:39.559 [INFO][4631] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:26:39.777910 env[1913]: 2025-07-12 00:26:39.560 [INFO][4631] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-120' Jul 12 00:26:39.777910 env[1913]: 2025-07-12 00:26:39.601 [INFO][4631] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7f610f9b872029e3d2bb2673d5e3147c30df90e05f37546689a0bb39eb45e714" host="ip-172-31-29-120" Jul 12 00:26:39.777910 env[1913]: 2025-07-12 00:26:39.608 [INFO][4631] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-120" Jul 12 00:26:39.777910 env[1913]: 2025-07-12 00:26:39.618 [INFO][4631] ipam/ipam.go 511: Trying affinity for 192.168.107.192/26 host="ip-172-31-29-120" Jul 12 00:26:39.777910 env[1913]: 2025-07-12 00:26:39.621 [INFO][4631] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.192/26 host="ip-172-31-29-120" Jul 12 00:26:39.777910 env[1913]: 2025-07-12 00:26:39.625 [INFO][4631] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.192/26 host="ip-172-31-29-120" Jul 12 00:26:39.777910 env[1913]: 2025-07-12 00:26:39.625 [INFO][4631] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.192/26 handle="k8s-pod-network.7f610f9b872029e3d2bb2673d5e3147c30df90e05f37546689a0bb39eb45e714" host="ip-172-31-29-120" Jul 12 00:26:39.777910 env[1913]: 2025-07-12 00:26:39.627 [INFO][4631] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.7f610f9b872029e3d2bb2673d5e3147c30df90e05f37546689a0bb39eb45e714 Jul 12 00:26:39.777910 env[1913]: 2025-07-12 00:26:39.642 [INFO][4631] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.192/26 handle="k8s-pod-network.7f610f9b872029e3d2bb2673d5e3147c30df90e05f37546689a0bb39eb45e714" host="ip-172-31-29-120" Jul 12 00:26:39.777910 env[1913]: 2025-07-12 00:26:39.671 [INFO][4631] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.107.195/26] block=192.168.107.192/26 handle="k8s-pod-network.7f610f9b872029e3d2bb2673d5e3147c30df90e05f37546689a0bb39eb45e714" host="ip-172-31-29-120" Jul 12 00:26:39.777910 env[1913]: 2025-07-12 00:26:39.671 [INFO][4631] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.195/26] handle="k8s-pod-network.7f610f9b872029e3d2bb2673d5e3147c30df90e05f37546689a0bb39eb45e714" host="ip-172-31-29-120" Jul 12 00:26:39.777910 env[1913]: 2025-07-12 00:26:39.671 [INFO][4631] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:26:39.777910 env[1913]: 2025-07-12 00:26:39.671 [INFO][4631] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.195/26] IPv6=[] ContainerID="7f610f9b872029e3d2bb2673d5e3147c30df90e05f37546689a0bb39eb45e714" HandleID="k8s-pod-network.7f610f9b872029e3d2bb2673d5e3147c30df90e05f37546689a0bb39eb45e714" Workload="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--6g88r-eth0" Jul 12 00:26:39.779471 env[1913]: 2025-07-12 00:26:39.680 [INFO][4600] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7f610f9b872029e3d2bb2673d5e3147c30df90e05f37546689a0bb39eb45e714" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6g88r" WorkloadEndpoint="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--6g88r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--120-k8s-coredns--7c65d6cfc9--6g88r-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"08c64d92-9452-47f2-8a8c-8837e4813c7d", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 25, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-120", ContainerID:"", Pod:"coredns-7c65d6cfc9-6g88r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliecb817ff36d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:26:39.779471 env[1913]: 2025-07-12 00:26:39.680 [INFO][4600] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.195/32] ContainerID="7f610f9b872029e3d2bb2673d5e3147c30df90e05f37546689a0bb39eb45e714" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6g88r" WorkloadEndpoint="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--6g88r-eth0" Jul 12 00:26:39.779471 env[1913]: 2025-07-12 00:26:39.680 [INFO][4600] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliecb817ff36d ContainerID="7f610f9b872029e3d2bb2673d5e3147c30df90e05f37546689a0bb39eb45e714" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6g88r" WorkloadEndpoint="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--6g88r-eth0" Jul 12 00:26:39.779471 env[1913]: 2025-07-12 00:26:39.708 [INFO][4600] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f610f9b872029e3d2bb2673d5e3147c30df90e05f37546689a0bb39eb45e714" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6g88r" WorkloadEndpoint="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--6g88r-eth0" Jul 12 00:26:39.779471 env[1913]: 2025-07-12 00:26:39.708 [INFO][4600] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7f610f9b872029e3d2bb2673d5e3147c30df90e05f37546689a0bb39eb45e714" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6g88r" WorkloadEndpoint="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--6g88r-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--120-k8s-coredns--7c65d6cfc9--6g88r-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"08c64d92-9452-47f2-8a8c-8837e4813c7d", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 25, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-120", ContainerID:"7f610f9b872029e3d2bb2673d5e3147c30df90e05f37546689a0bb39eb45e714", Pod:"coredns-7c65d6cfc9-6g88r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliecb817ff36d", MAC:"5a:f0:97:a5:d4:20", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:26:39.779471 env[1913]: 2025-07-12 00:26:39.743 [INFO][4600] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7f610f9b872029e3d2bb2673d5e3147c30df90e05f37546689a0bb39eb45e714" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6g88r" WorkloadEndpoint="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--6g88r-eth0" Jul 12 00:26:39.840810 env[1913]: time="2025-07-12T00:26:39.838955037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:26:39.840810 env[1913]: time="2025-07-12T00:26:39.839044739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:26:39.840810 env[1913]: time="2025-07-12T00:26:39.839072184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:26:39.840810 env[1913]: time="2025-07-12T00:26:39.839733061Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/55d5be9e0599d4521e0853d793265afe3c6fd026f10dabfb9d1a1ed7b688f7be pid=4678 runtime=io.containerd.runc.v2 Jul 12 00:26:39.852000 audit[4690]: NETFILTER_CFG table=filter:110 family=2 entries=36 op=nft_register_chain pid=4690 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 12 00:26:39.852000 audit[4690]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19156 a0=3 a1=fffff9d44400 a2=0 a3=ffff9381afa8 items=0 ppid=4315 pid=4690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:39.852000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 12 00:26:39.864037 systemd-networkd[1586]: calib7ab39c51fe: Link UP Jul 12 00:26:39.883630 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib7ab39c51fe: link 
becomes ready Jul 12 00:26:39.883204 systemd-networkd[1586]: calib7ab39c51fe: Gained carrier Jul 12 00:26:39.930848 env[1913]: time="2025-07-12T00:26:39.930653816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:26:39.931118 env[1913]: time="2025-07-12T00:26:39.931064633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:26:39.931339 env[1913]: time="2025-07-12T00:26:39.931287633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:26:39.931830 env[1913]: time="2025-07-12T00:26:39.931767283Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f610f9b872029e3d2bb2673d5e3147c30df90e05f37546689a0bb39eb45e714 pid=4694 runtime=io.containerd.runc.v2 Jul 12 00:26:39.932622 env[1913]: 2025-07-12 00:26:39.461 [INFO][4585] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--120-k8s-csi--node--driver--g7wxf-eth0 csi-node-driver- calico-system 355545c7-e2b3-4e21-bab3-2e3ea1245fce 931 0 2025-07-12 00:26:14 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-29-120 csi-node-driver-g7wxf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib7ab39c51fe [] [] }} ContainerID="89020aaf13edd8f4e41e6352c6e10f4894246d1f5f0a17682ab1687e04ee8af7" Namespace="calico-system" Pod="csi-node-driver-g7wxf" WorkloadEndpoint="ip--172--31--29--120-k8s-csi--node--driver--g7wxf-" Jul 12 00:26:39.932622 
env[1913]: 2025-07-12 00:26:39.461 [INFO][4585] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="89020aaf13edd8f4e41e6352c6e10f4894246d1f5f0a17682ab1687e04ee8af7" Namespace="calico-system" Pod="csi-node-driver-g7wxf" WorkloadEndpoint="ip--172--31--29--120-k8s-csi--node--driver--g7wxf-eth0" Jul 12 00:26:39.932622 env[1913]: 2025-07-12 00:26:39.675 [INFO][4639] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="89020aaf13edd8f4e41e6352c6e10f4894246d1f5f0a17682ab1687e04ee8af7" HandleID="k8s-pod-network.89020aaf13edd8f4e41e6352c6e10f4894246d1f5f0a17682ab1687e04ee8af7" Workload="ip--172--31--29--120-k8s-csi--node--driver--g7wxf-eth0" Jul 12 00:26:39.932622 env[1913]: 2025-07-12 00:26:39.677 [INFO][4639] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="89020aaf13edd8f4e41e6352c6e10f4894246d1f5f0a17682ab1687e04ee8af7" HandleID="k8s-pod-network.89020aaf13edd8f4e41e6352c6e10f4894246d1f5f0a17682ab1687e04ee8af7" Workload="ip--172--31--29--120-k8s-csi--node--driver--g7wxf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd110), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-120", "pod":"csi-node-driver-g7wxf", "timestamp":"2025-07-12 00:26:39.675285439 +0000 UTC"}, Hostname:"ip-172-31-29-120", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:26:39.932622 env[1913]: 2025-07-12 00:26:39.677 [INFO][4639] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:26:39.932622 env[1913]: 2025-07-12 00:26:39.677 [INFO][4639] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:26:39.932622 env[1913]: 2025-07-12 00:26:39.678 [INFO][4639] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-120' Jul 12 00:26:39.932622 env[1913]: 2025-07-12 00:26:39.754 [INFO][4639] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.89020aaf13edd8f4e41e6352c6e10f4894246d1f5f0a17682ab1687e04ee8af7" host="ip-172-31-29-120" Jul 12 00:26:39.932622 env[1913]: 2025-07-12 00:26:39.763 [INFO][4639] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-120" Jul 12 00:26:39.932622 env[1913]: 2025-07-12 00:26:39.772 [INFO][4639] ipam/ipam.go 511: Trying affinity for 192.168.107.192/26 host="ip-172-31-29-120" Jul 12 00:26:39.932622 env[1913]: 2025-07-12 00:26:39.780 [INFO][4639] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.192/26 host="ip-172-31-29-120" Jul 12 00:26:39.932622 env[1913]: 2025-07-12 00:26:39.791 [INFO][4639] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.192/26 host="ip-172-31-29-120" Jul 12 00:26:39.932622 env[1913]: 2025-07-12 00:26:39.793 [INFO][4639] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.192/26 handle="k8s-pod-network.89020aaf13edd8f4e41e6352c6e10f4894246d1f5f0a17682ab1687e04ee8af7" host="ip-172-31-29-120" Jul 12 00:26:39.932622 env[1913]: 2025-07-12 00:26:39.795 [INFO][4639] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.89020aaf13edd8f4e41e6352c6e10f4894246d1f5f0a17682ab1687e04ee8af7 Jul 12 00:26:39.932622 env[1913]: 2025-07-12 00:26:39.804 [INFO][4639] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.192/26 handle="k8s-pod-network.89020aaf13edd8f4e41e6352c6e10f4894246d1f5f0a17682ab1687e04ee8af7" host="ip-172-31-29-120" Jul 12 00:26:39.932622 env[1913]: 2025-07-12 00:26:39.826 [INFO][4639] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.107.196/26] block=192.168.107.192/26 
handle="k8s-pod-network.89020aaf13edd8f4e41e6352c6e10f4894246d1f5f0a17682ab1687e04ee8af7" host="ip-172-31-29-120" Jul 12 00:26:39.932622 env[1913]: 2025-07-12 00:26:39.826 [INFO][4639] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.196/26] handle="k8s-pod-network.89020aaf13edd8f4e41e6352c6e10f4894246d1f5f0a17682ab1687e04ee8af7" host="ip-172-31-29-120" Jul 12 00:26:39.932622 env[1913]: 2025-07-12 00:26:39.826 [INFO][4639] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:26:39.932622 env[1913]: 2025-07-12 00:26:39.826 [INFO][4639] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.196/26] IPv6=[] ContainerID="89020aaf13edd8f4e41e6352c6e10f4894246d1f5f0a17682ab1687e04ee8af7" HandleID="k8s-pod-network.89020aaf13edd8f4e41e6352c6e10f4894246d1f5f0a17682ab1687e04ee8af7" Workload="ip--172--31--29--120-k8s-csi--node--driver--g7wxf-eth0" Jul 12 00:26:39.934301 env[1913]: 2025-07-12 00:26:39.830 [INFO][4585] cni-plugin/k8s.go 418: Populated endpoint ContainerID="89020aaf13edd8f4e41e6352c6e10f4894246d1f5f0a17682ab1687e04ee8af7" Namespace="calico-system" Pod="csi-node-driver-g7wxf" WorkloadEndpoint="ip--172--31--29--120-k8s-csi--node--driver--g7wxf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--120-k8s-csi--node--driver--g7wxf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"355545c7-e2b3-4e21-bab3-2e3ea1245fce", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-120", ContainerID:"", Pod:"csi-node-driver-g7wxf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.107.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib7ab39c51fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:26:39.934301 env[1913]: 2025-07-12 00:26:39.830 [INFO][4585] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.196/32] ContainerID="89020aaf13edd8f4e41e6352c6e10f4894246d1f5f0a17682ab1687e04ee8af7" Namespace="calico-system" Pod="csi-node-driver-g7wxf" WorkloadEndpoint="ip--172--31--29--120-k8s-csi--node--driver--g7wxf-eth0" Jul 12 00:26:39.934301 env[1913]: 2025-07-12 00:26:39.830 [INFO][4585] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib7ab39c51fe ContainerID="89020aaf13edd8f4e41e6352c6e10f4894246d1f5f0a17682ab1687e04ee8af7" Namespace="calico-system" Pod="csi-node-driver-g7wxf" WorkloadEndpoint="ip--172--31--29--120-k8s-csi--node--driver--g7wxf-eth0" Jul 12 00:26:39.934301 env[1913]: 2025-07-12 00:26:39.886 [INFO][4585] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="89020aaf13edd8f4e41e6352c6e10f4894246d1f5f0a17682ab1687e04ee8af7" Namespace="calico-system" Pod="csi-node-driver-g7wxf" WorkloadEndpoint="ip--172--31--29--120-k8s-csi--node--driver--g7wxf-eth0" Jul 12 00:26:39.934301 env[1913]: 2025-07-12 00:26:39.896 [INFO][4585] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="89020aaf13edd8f4e41e6352c6e10f4894246d1f5f0a17682ab1687e04ee8af7" Namespace="calico-system" Pod="csi-node-driver-g7wxf" WorkloadEndpoint="ip--172--31--29--120-k8s-csi--node--driver--g7wxf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--120-k8s-csi--node--driver--g7wxf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"355545c7-e2b3-4e21-bab3-2e3ea1245fce", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-120", ContainerID:"89020aaf13edd8f4e41e6352c6e10f4894246d1f5f0a17682ab1687e04ee8af7", Pod:"csi-node-driver-g7wxf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.107.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib7ab39c51fe", MAC:"da:a9:e7:91:0d:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:26:39.934301 env[1913]: 2025-07-12 00:26:39.920 [INFO][4585] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="89020aaf13edd8f4e41e6352c6e10f4894246d1f5f0a17682ab1687e04ee8af7" 
Namespace="calico-system" Pod="csi-node-driver-g7wxf" WorkloadEndpoint="ip--172--31--29--120-k8s-csi--node--driver--g7wxf-eth0" Jul 12 00:26:39.977000 audit[4720]: NETFILTER_CFG table=filter:111 family=2 entries=44 op=nft_register_chain pid=4720 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 12 00:26:39.977000 audit[4720]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21952 a0=3 a1=fffffcd7cec0 a2=0 a3=ffff90b46fa8 items=0 ppid=4315 pid=4720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:39.977000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 12 00:26:40.043814 env[1913]: time="2025-07-12T00:26:40.033075441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:26:40.043814 env[1913]: time="2025-07-12T00:26:40.033145018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:26:40.043814 env[1913]: time="2025-07-12T00:26:40.033500153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:26:40.043814 env[1913]: time="2025-07-12T00:26:40.034832635Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/89020aaf13edd8f4e41e6352c6e10f4894246d1f5f0a17682ab1687e04ee8af7 pid=4730 runtime=io.containerd.runc.v2 Jul 12 00:26:40.224529 env[1913]: time="2025-07-12T00:26:40.224472278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-msgjt,Uid:b25db915-a031-4109-9564-cc0834ce0083,Namespace:kube-system,Attempt:1,} returns sandbox id \"55d5be9e0599d4521e0853d793265afe3c6fd026f10dabfb9d1a1ed7b688f7be\"" Jul 12 00:26:40.240650 env[1913]: time="2025-07-12T00:26:40.240551450Z" level=info msg="CreateContainer within sandbox \"55d5be9e0599d4521e0853d793265afe3c6fd026f10dabfb9d1a1ed7b688f7be\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:26:40.242426 env[1913]: time="2025-07-12T00:26:40.242321988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6g88r,Uid:08c64d92-9452-47f2-8a8c-8837e4813c7d,Namespace:kube-system,Attempt:1,} returns sandbox id \"7f610f9b872029e3d2bb2673d5e3147c30df90e05f37546689a0bb39eb45e714\"" Jul 12 00:26:40.270173 env[1913]: time="2025-07-12T00:26:40.270104711Z" level=info msg="CreateContainer within sandbox \"7f610f9b872029e3d2bb2673d5e3147c30df90e05f37546689a0bb39eb45e714\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:26:40.314153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1241252153.mount: Deactivated successfully. 
Jul 12 00:26:40.333003 env[1913]: time="2025-07-12T00:26:40.332900921Z" level=info msg="CreateContainer within sandbox \"55d5be9e0599d4521e0853d793265afe3c6fd026f10dabfb9d1a1ed7b688f7be\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"25afcba74f3ed4c2a77e33ea6cc289e89a324814c4550ebdb8ef0e7d622774c2\"" Jul 12 00:26:40.336925 env[1913]: time="2025-07-12T00:26:40.334437251Z" level=info msg="StartContainer for \"25afcba74f3ed4c2a77e33ea6cc289e89a324814c4550ebdb8ef0e7d622774c2\"" Jul 12 00:26:40.348990 env[1913]: time="2025-07-12T00:26:40.348913060Z" level=info msg="CreateContainer within sandbox \"7f610f9b872029e3d2bb2673d5e3147c30df90e05f37546689a0bb39eb45e714\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"21c2cd4a9810df3d392cc34ba467a2d2557b184ff220ef7ac0050f9ed904b266\"" Jul 12 00:26:40.349538 env[1913]: time="2025-07-12T00:26:40.349180125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g7wxf,Uid:355545c7-e2b3-4e21-bab3-2e3ea1245fce,Namespace:calico-system,Attempt:1,} returns sandbox id \"89020aaf13edd8f4e41e6352c6e10f4894246d1f5f0a17682ab1687e04ee8af7\"" Jul 12 00:26:40.352540 env[1913]: time="2025-07-12T00:26:40.352462273Z" level=info msg="StartContainer for \"21c2cd4a9810df3d392cc34ba467a2d2557b184ff220ef7ac0050f9ed904b266\"" Jul 12 00:26:40.389293 env[1913]: time="2025-07-12T00:26:40.389019362Z" level=info msg="StopPodSandbox for \"d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8\"" Jul 12 00:26:40.587580 env[1913]: time="2025-07-12T00:26:40.587433503Z" level=info msg="StartContainer for \"25afcba74f3ed4c2a77e33ea6cc289e89a324814c4550ebdb8ef0e7d622774c2\" returns successfully" Jul 12 00:26:40.639821 env[1913]: time="2025-07-12T00:26:40.637328359Z" level=info msg="StartContainer for \"21c2cd4a9810df3d392cc34ba467a2d2557b184ff220ef7ac0050f9ed904b266\" returns successfully" Jul 12 00:26:40.693987 systemd-networkd[1586]: calic47b68d46fc: Gained IPv6LL Jul 12 
00:26:40.891817 kubelet[2983]: I0712 00:26:40.888555 2983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-6g88r" podStartSLOduration=53.888531788 podStartE2EDuration="53.888531788s" podCreationTimestamp="2025-07-12 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:26:40.831463049 +0000 UTC m=+57.804664081" watchObservedRunningTime="2025-07-12 00:26:40.888531788 +0000 UTC m=+57.861732820" Jul 12 00:26:40.891817 kubelet[2983]: I0712 00:26:40.890010 2983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-msgjt" podStartSLOduration=53.889989901 podStartE2EDuration="53.889989901s" podCreationTimestamp="2025-07-12 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:26:40.887056132 +0000 UTC m=+57.860257152" watchObservedRunningTime="2025-07-12 00:26:40.889989901 +0000 UTC m=+57.863190909" Jul 12 00:26:40.976000 audit[4902]: NETFILTER_CFG table=filter:112 family=2 entries=20 op=nft_register_rule pid=4902 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:26:40.976000 audit[4902]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd3fd3630 a2=0 a3=1 items=0 ppid=3133 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:40.976000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:26:40.985000 audit[4902]: NETFILTER_CFG table=nat:113 family=2 entries=14 op=nft_register_rule pid=4902 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:26:40.985000 
audit[4902]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffd3fd3630 a2=0 a3=1 items=0 ppid=3133 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:40.985000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:26:40.996298 env[1913]: 2025-07-12 00:26:40.731 [INFO][4863] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" Jul 12 00:26:40.996298 env[1913]: 2025-07-12 00:26:40.731 [INFO][4863] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" iface="eth0" netns="/var/run/netns/cni-051421a8-cf7b-403f-96da-0dbbc8288186" Jul 12 00:26:40.996298 env[1913]: 2025-07-12 00:26:40.731 [INFO][4863] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" iface="eth0" netns="/var/run/netns/cni-051421a8-cf7b-403f-96da-0dbbc8288186" Jul 12 00:26:40.996298 env[1913]: 2025-07-12 00:26:40.732 [INFO][4863] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" iface="eth0" netns="/var/run/netns/cni-051421a8-cf7b-403f-96da-0dbbc8288186" Jul 12 00:26:40.996298 env[1913]: 2025-07-12 00:26:40.732 [INFO][4863] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" Jul 12 00:26:40.996298 env[1913]: 2025-07-12 00:26:40.732 [INFO][4863] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" Jul 12 00:26:40.996298 env[1913]: 2025-07-12 00:26:40.891 [INFO][4893] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" HandleID="k8s-pod-network.d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" Workload="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--bxk4f-eth0" Jul 12 00:26:40.996298 env[1913]: 2025-07-12 00:26:40.892 [INFO][4893] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:26:40.996298 env[1913]: 2025-07-12 00:26:40.892 [INFO][4893] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:26:40.996298 env[1913]: 2025-07-12 00:26:40.942 [WARNING][4893] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" HandleID="k8s-pod-network.d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" Workload="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--bxk4f-eth0" Jul 12 00:26:40.996298 env[1913]: 2025-07-12 00:26:40.943 [INFO][4893] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" HandleID="k8s-pod-network.d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" Workload="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--bxk4f-eth0" Jul 12 00:26:40.996298 env[1913]: 2025-07-12 00:26:40.957 [INFO][4893] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:26:40.996298 env[1913]: 2025-07-12 00:26:40.975 [INFO][4863] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" Jul 12 00:26:40.997785 env[1913]: time="2025-07-12T00:26:40.996319223Z" level=info msg="TearDown network for sandbox \"d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8\" successfully" Jul 12 00:26:40.997785 env[1913]: time="2025-07-12T00:26:40.996367116Z" level=info msg="StopPodSandbox for \"d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8\" returns successfully" Jul 12 00:26:40.997785 env[1913]: time="2025-07-12T00:26:40.997260702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8494455ff7-bxk4f,Uid:c1b2666c-1d6f-4ba9-9d83-e51550e0fc3d,Namespace:calico-apiserver,Attempt:1,}" Jul 12 00:26:41.064000 audit[4912]: NETFILTER_CFG table=filter:114 family=2 entries=17 op=nft_register_rule pid=4912 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:26:41.064000 audit[4912]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc6edb880 a2=0 a3=1 items=0 ppid=3133 pid=4912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:41.064000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:26:41.069000 audit[4912]: NETFILTER_CFG table=nat:115 family=2 entries=35 op=nft_register_chain pid=4912 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:26:41.069000 audit[4912]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffc6edb880 a2=0 a3=1 items=0 ppid=3133 pid=4912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:41.069000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:26:41.141896 systemd-networkd[1586]: caliecb817ff36d: Gained IPv6LL Jul 12 00:26:41.191686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount597957032.mount: Deactivated successfully. Jul 12 00:26:41.192213 systemd[1]: run-netns-cni\x2d051421a8\x2dcf7b\x2d403f\x2d96da\x2d0dbbc8288186.mount: Deactivated successfully. 
Jul 12 00:26:41.205856 systemd-networkd[1586]: calib7ab39c51fe: Gained IPv6LL Jul 12 00:26:41.383588 systemd-networkd[1586]: cali93d6a4cb797: Link UP Jul 12 00:26:41.392866 env[1913]: time="2025-07-12T00:26:41.392724254Z" level=info msg="StopPodSandbox for \"607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a\"" Jul 12 00:26:41.398386 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 12 00:26:41.399049 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali93d6a4cb797: link becomes ready Jul 12 00:26:41.398824 systemd-networkd[1586]: cali93d6a4cb797: Gained carrier Jul 12 00:26:41.410945 env[1913]: time="2025-07-12T00:26:41.410830449Z" level=info msg="StopPodSandbox for \"a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431\"" Jul 12 00:26:41.433801 env[1913]: time="2025-07-12T00:26:41.426206460Z" level=info msg="StopPodSandbox for \"503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3\"" Jul 12 00:26:41.468859 env[1913]: 2025-07-12 00:26:41.184 [INFO][4904] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--bxk4f-eth0 calico-apiserver-8494455ff7- calico-apiserver c1b2666c-1d6f-4ba9-9d83-e51550e0fc3d 954 0 2025-07-12 00:26:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8494455ff7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-29-120 calico-apiserver-8494455ff7-bxk4f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali93d6a4cb797 [] [] }} ContainerID="734b74e9a2fff41ce9219ca962abe2d7890ebbf64b1024712ba431fd3511eed1" Namespace="calico-apiserver" Pod="calico-apiserver-8494455ff7-bxk4f" WorkloadEndpoint="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--bxk4f-" Jul 12 00:26:41.468859 
env[1913]: 2025-07-12 00:26:41.184 [INFO][4904] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="734b74e9a2fff41ce9219ca962abe2d7890ebbf64b1024712ba431fd3511eed1" Namespace="calico-apiserver" Pod="calico-apiserver-8494455ff7-bxk4f" WorkloadEndpoint="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--bxk4f-eth0" Jul 12 00:26:41.468859 env[1913]: 2025-07-12 00:26:41.285 [INFO][4919] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="734b74e9a2fff41ce9219ca962abe2d7890ebbf64b1024712ba431fd3511eed1" HandleID="k8s-pod-network.734b74e9a2fff41ce9219ca962abe2d7890ebbf64b1024712ba431fd3511eed1" Workload="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--bxk4f-eth0" Jul 12 00:26:41.468859 env[1913]: 2025-07-12 00:26:41.286 [INFO][4919] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="734b74e9a2fff41ce9219ca962abe2d7890ebbf64b1024712ba431fd3511eed1" HandleID="k8s-pod-network.734b74e9a2fff41ce9219ca962abe2d7890ebbf64b1024712ba431fd3511eed1" Workload="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--bxk4f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000322140), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-29-120", "pod":"calico-apiserver-8494455ff7-bxk4f", "timestamp":"2025-07-12 00:26:41.2859041 +0000 UTC"}, Hostname:"ip-172-31-29-120", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:26:41.468859 env[1913]: 2025-07-12 00:26:41.286 [INFO][4919] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:26:41.468859 env[1913]: 2025-07-12 00:26:41.286 [INFO][4919] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:26:41.468859 env[1913]: 2025-07-12 00:26:41.286 [INFO][4919] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-120' Jul 12 00:26:41.468859 env[1913]: 2025-07-12 00:26:41.303 [INFO][4919] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.734b74e9a2fff41ce9219ca962abe2d7890ebbf64b1024712ba431fd3511eed1" host="ip-172-31-29-120" Jul 12 00:26:41.468859 env[1913]: 2025-07-12 00:26:41.312 [INFO][4919] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-120" Jul 12 00:26:41.468859 env[1913]: 2025-07-12 00:26:41.323 [INFO][4919] ipam/ipam.go 511: Trying affinity for 192.168.107.192/26 host="ip-172-31-29-120" Jul 12 00:26:41.468859 env[1913]: 2025-07-12 00:26:41.327 [INFO][4919] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.192/26 host="ip-172-31-29-120" Jul 12 00:26:41.468859 env[1913]: 2025-07-12 00:26:41.331 [INFO][4919] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.192/26 host="ip-172-31-29-120" Jul 12 00:26:41.468859 env[1913]: 2025-07-12 00:26:41.332 [INFO][4919] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.192/26 handle="k8s-pod-network.734b74e9a2fff41ce9219ca962abe2d7890ebbf64b1024712ba431fd3511eed1" host="ip-172-31-29-120" Jul 12 00:26:41.468859 env[1913]: 2025-07-12 00:26:41.335 [INFO][4919] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.734b74e9a2fff41ce9219ca962abe2d7890ebbf64b1024712ba431fd3511eed1 Jul 12 00:26:41.468859 env[1913]: 2025-07-12 00:26:41.343 [INFO][4919] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.192/26 handle="k8s-pod-network.734b74e9a2fff41ce9219ca962abe2d7890ebbf64b1024712ba431fd3511eed1" host="ip-172-31-29-120" Jul 12 00:26:41.468859 env[1913]: 2025-07-12 00:26:41.359 [INFO][4919] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.107.197/26] block=192.168.107.192/26 
handle="k8s-pod-network.734b74e9a2fff41ce9219ca962abe2d7890ebbf64b1024712ba431fd3511eed1" host="ip-172-31-29-120" Jul 12 00:26:41.468859 env[1913]: 2025-07-12 00:26:41.359 [INFO][4919] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.197/26] handle="k8s-pod-network.734b74e9a2fff41ce9219ca962abe2d7890ebbf64b1024712ba431fd3511eed1" host="ip-172-31-29-120" Jul 12 00:26:41.468859 env[1913]: 2025-07-12 00:26:41.359 [INFO][4919] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:26:41.468859 env[1913]: 2025-07-12 00:26:41.359 [INFO][4919] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.197/26] IPv6=[] ContainerID="734b74e9a2fff41ce9219ca962abe2d7890ebbf64b1024712ba431fd3511eed1" HandleID="k8s-pod-network.734b74e9a2fff41ce9219ca962abe2d7890ebbf64b1024712ba431fd3511eed1" Workload="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--bxk4f-eth0" Jul 12 00:26:41.471933 env[1913]: 2025-07-12 00:26:41.363 [INFO][4904] cni-plugin/k8s.go 418: Populated endpoint ContainerID="734b74e9a2fff41ce9219ca962abe2d7890ebbf64b1024712ba431fd3511eed1" Namespace="calico-apiserver" Pod="calico-apiserver-8494455ff7-bxk4f" WorkloadEndpoint="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--bxk4f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--bxk4f-eth0", GenerateName:"calico-apiserver-8494455ff7-", Namespace:"calico-apiserver", SelfLink:"", UID:"c1b2666c-1d6f-4ba9-9d83-e51550e0fc3d", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8494455ff7", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-120", ContainerID:"", Pod:"calico-apiserver-8494455ff7-bxk4f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali93d6a4cb797", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:26:41.471933 env[1913]: 2025-07-12 00:26:41.364 [INFO][4904] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.197/32] ContainerID="734b74e9a2fff41ce9219ca962abe2d7890ebbf64b1024712ba431fd3511eed1" Namespace="calico-apiserver" Pod="calico-apiserver-8494455ff7-bxk4f" WorkloadEndpoint="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--bxk4f-eth0" Jul 12 00:26:41.471933 env[1913]: 2025-07-12 00:26:41.364 [INFO][4904] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali93d6a4cb797 ContainerID="734b74e9a2fff41ce9219ca962abe2d7890ebbf64b1024712ba431fd3511eed1" Namespace="calico-apiserver" Pod="calico-apiserver-8494455ff7-bxk4f" WorkloadEndpoint="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--bxk4f-eth0" Jul 12 00:26:41.471933 env[1913]: 2025-07-12 00:26:41.401 [INFO][4904] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="734b74e9a2fff41ce9219ca962abe2d7890ebbf64b1024712ba431fd3511eed1" Namespace="calico-apiserver" Pod="calico-apiserver-8494455ff7-bxk4f" WorkloadEndpoint="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--bxk4f-eth0" Jul 12 00:26:41.471933 env[1913]: 2025-07-12 00:26:41.401 [INFO][4904] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="734b74e9a2fff41ce9219ca962abe2d7890ebbf64b1024712ba431fd3511eed1" Namespace="calico-apiserver" Pod="calico-apiserver-8494455ff7-bxk4f" WorkloadEndpoint="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--bxk4f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--bxk4f-eth0", GenerateName:"calico-apiserver-8494455ff7-", Namespace:"calico-apiserver", SelfLink:"", UID:"c1b2666c-1d6f-4ba9-9d83-e51550e0fc3d", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8494455ff7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-120", ContainerID:"734b74e9a2fff41ce9219ca962abe2d7890ebbf64b1024712ba431fd3511eed1", Pod:"calico-apiserver-8494455ff7-bxk4f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali93d6a4cb797", MAC:"72:1a:d1:16:f2:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:26:41.471933 env[1913]: 2025-07-12 00:26:41.426 [INFO][4904] cni-plugin/k8s.go 
532: Wrote updated endpoint to datastore ContainerID="734b74e9a2fff41ce9219ca962abe2d7890ebbf64b1024712ba431fd3511eed1" Namespace="calico-apiserver" Pod="calico-apiserver-8494455ff7-bxk4f" WorkloadEndpoint="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--bxk4f-eth0" Jul 12 00:26:41.698622 env[1913]: time="2025-07-12T00:26:41.698398193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:26:41.701592 env[1913]: time="2025-07-12T00:26:41.701497960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:26:41.701924 env[1913]: time="2025-07-12T00:26:41.701847574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:26:41.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.29.120:22-147.75.109.163:34078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:41.710240 systemd[1]: Started sshd@7-172.31.29.120:22-147.75.109.163:34078.service. 
Jul 12 00:26:41.723050 env[1913]: time="2025-07-12T00:26:41.722878105Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/734b74e9a2fff41ce9219ca962abe2d7890ebbf64b1024712ba431fd3511eed1 pid=4977 runtime=io.containerd.runc.v2 Jul 12 00:26:41.722000 audit[4993]: NETFILTER_CFG table=filter:116 family=2 entries=62 op=nft_register_chain pid=4993 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 12 00:26:41.722000 audit[4993]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=31772 a0=3 a1=ffffdd3f94d0 a2=0 a3=ffff8376bfa8 items=0 ppid=4315 pid=4993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:41.722000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 12 00:26:41.994000 audit[4992]: USER_ACCT pid=4992 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:41.995953 sshd[4992]: Accepted publickey for core from 147.75.109.163 port 34078 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:26:41.997032 kernel: kauditd_printk_skb: 572 callbacks suppressed Jul 12 00:26:41.997122 kernel: audit: type=1101 audit(1752280001.994:420): pid=4992 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:41.999710 sshd[4992]: pam_unix(sshd:session): session opened for user core(uid=500) by 
(uid=0) Jul 12 00:26:41.998000 audit[4992]: CRED_ACQ pid=4992 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:42.031368 kernel: audit: type=1103 audit(1752280001.998:421): pid=4992 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:42.045539 systemd[1]: Started session-8.scope. Jul 12 00:26:42.047197 systemd-logind[1905]: New session 8 of user core. Jul 12 00:26:42.054459 kernel: audit: type=1006 audit(1752280001.998:422): pid=4992 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Jul 12 00:26:41.998000 audit[4992]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffbdd1320 a2=3 a3=1 items=0 ppid=1 pid=4992 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:42.089339 kernel: audit: type=1300 audit(1752280001.998:422): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffbdd1320 a2=3 a3=1 items=0 ppid=1 pid=4992 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:41.998000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:26:42.096844 kernel: audit: type=1327 audit(1752280001.998:422): proctitle=737368643A20636F7265205B707269765D Jul 12 00:26:42.087000 audit[4992]: USER_START pid=4992 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:42.113825 kernel: audit: type=1105 audit(1752280002.087:423): pid=4992 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:42.090000 audit[5032]: CRED_ACQ pid=5032 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:42.126187 env[1913]: time="2025-07-12T00:26:42.126130466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8494455ff7-bxk4f,Uid:c1b2666c-1d6f-4ba9-9d83-e51550e0fc3d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"734b74e9a2fff41ce9219ca962abe2d7890ebbf64b1024712ba431fd3511eed1\"" Jul 12 00:26:42.134323 kernel: audit: type=1103 audit(1752280002.090:424): pid=5032 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:42.194118 kernel: audit: type=1325 audit(1752280002.164:425): table=filter:117 family=2 entries=14 op=nft_register_rule pid=5041 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:26:42.194293 kernel: audit: type=1300 audit(1752280002.164:425): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd3149270 a2=0 a3=1 items=0 ppid=3133 pid=5041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:42.164000 audit[5041]: NETFILTER_CFG table=filter:117 family=2 entries=14 op=nft_register_rule pid=5041 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:26:42.164000 audit[5041]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd3149270 a2=0 a3=1 items=0 ppid=3133 pid=5041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:42.229349 kernel: audit: type=1327 audit(1752280002.164:425): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:26:42.164000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:26:42.231000 audit[5041]: NETFILTER_CFG table=nat:118 family=2 entries=56 op=nft_register_chain pid=5041 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:26:42.231000 audit[5041]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffd3149270 a2=0 a3=1 items=0 ppid=3133 pid=5041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:42.231000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:26:42.466507 env[1913]: 2025-07-12 00:26:42.022 [INFO][4959] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" Jul 12 00:26:42.466507 env[1913]: 2025-07-12 00:26:42.023 [INFO][4959] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" iface="eth0" netns="/var/run/netns/cni-083e9291-f07c-a81a-4edb-a1d2382a99b5" Jul 12 00:26:42.466507 env[1913]: 2025-07-12 00:26:42.023 [INFO][4959] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" iface="eth0" netns="/var/run/netns/cni-083e9291-f07c-a81a-4edb-a1d2382a99b5" Jul 12 00:26:42.466507 env[1913]: 2025-07-12 00:26:42.023 [INFO][4959] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" iface="eth0" netns="/var/run/netns/cni-083e9291-f07c-a81a-4edb-a1d2382a99b5" Jul 12 00:26:42.466507 env[1913]: 2025-07-12 00:26:42.023 [INFO][4959] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" Jul 12 00:26:42.466507 env[1913]: 2025-07-12 00:26:42.023 [INFO][4959] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" Jul 12 00:26:42.466507 env[1913]: 2025-07-12 00:26:42.424 [INFO][5033] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" HandleID="k8s-pod-network.503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" Workload="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--gwch8-eth0" Jul 12 00:26:42.466507 env[1913]: 2025-07-12 00:26:42.425 [INFO][5033] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:26:42.466507 env[1913]: 2025-07-12 00:26:42.425 [INFO][5033] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:26:42.466507 env[1913]: 2025-07-12 00:26:42.445 [WARNING][5033] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" HandleID="k8s-pod-network.503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" Workload="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--gwch8-eth0" Jul 12 00:26:42.466507 env[1913]: 2025-07-12 00:26:42.445 [INFO][5033] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" HandleID="k8s-pod-network.503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" Workload="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--gwch8-eth0" Jul 12 00:26:42.466507 env[1913]: 2025-07-12 00:26:42.450 [INFO][5033] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:26:42.466507 env[1913]: 2025-07-12 00:26:42.460 [INFO][4959] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" Jul 12 00:26:42.467731 env[1913]: time="2025-07-12T00:26:42.467676278Z" level=info msg="TearDown network for sandbox \"503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3\" successfully" Jul 12 00:26:42.467888 env[1913]: time="2025-07-12T00:26:42.467853377Z" level=info msg="StopPodSandbox for \"503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3\" returns successfully" Jul 12 00:26:42.470404 env[1913]: time="2025-07-12T00:26:42.470335863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8494455ff7-gwch8,Uid:7051a960-7ce8-45f1-8249-f71049b41599,Namespace:calico-apiserver,Attempt:1,}" Jul 12 00:26:42.473855 systemd[1]: run-netns-cni\x2d083e9291\x2df07c\x2da81a\x2d4edb\x2da1d2382a99b5.mount: Deactivated successfully. 
Jul 12 00:26:42.550270 sshd[4992]: pam_unix(sshd:session): session closed for user core Jul 12 00:26:42.563000 audit[4992]: USER_END pid=4992 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:42.563000 audit[4992]: CRED_DISP pid=4992 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:42.568746 systemd[1]: sshd@7-172.31.29.120:22-147.75.109.163:34078.service: Deactivated successfully. Jul 12 00:26:42.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.29.120:22-147.75.109.163:34078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:42.573268 systemd[1]: session-8.scope: Deactivated successfully. Jul 12 00:26:42.574510 systemd-logind[1905]: Session 8 logged out. Waiting for processes to exit. Jul 12 00:26:42.577210 systemd-logind[1905]: Removed session 8. Jul 12 00:26:42.580738 env[1913]: 2025-07-12 00:26:42.165 [INFO][4960] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" Jul 12 00:26:42.580738 env[1913]: 2025-07-12 00:26:42.165 [INFO][4960] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" iface="eth0" netns="/var/run/netns/cni-b7ab6dce-a58a-fc9f-4e15-fa33ebbebc37" Jul 12 00:26:42.580738 env[1913]: 2025-07-12 00:26:42.167 [INFO][4960] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" iface="eth0" netns="/var/run/netns/cni-b7ab6dce-a58a-fc9f-4e15-fa33ebbebc37" Jul 12 00:26:42.580738 env[1913]: 2025-07-12 00:26:42.167 [INFO][4960] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" iface="eth0" netns="/var/run/netns/cni-b7ab6dce-a58a-fc9f-4e15-fa33ebbebc37" Jul 12 00:26:42.580738 env[1913]: 2025-07-12 00:26:42.167 [INFO][4960] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" Jul 12 00:26:42.580738 env[1913]: 2025-07-12 00:26:42.168 [INFO][4960] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" Jul 12 00:26:42.580738 env[1913]: 2025-07-12 00:26:42.459 [INFO][5051] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" HandleID="k8s-pod-network.607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" Workload="ip--172--31--29--120-k8s-goldmane--58fd7646b9--p759q-eth0" Jul 12 00:26:42.580738 env[1913]: 2025-07-12 00:26:42.481 [INFO][5051] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:26:42.580738 env[1913]: 2025-07-12 00:26:42.481 [INFO][5051] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:26:42.580738 env[1913]: 2025-07-12 00:26:42.512 [WARNING][5051] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" HandleID="k8s-pod-network.607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" Workload="ip--172--31--29--120-k8s-goldmane--58fd7646b9--p759q-eth0" Jul 12 00:26:42.580738 env[1913]: 2025-07-12 00:26:42.514 [INFO][5051] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" HandleID="k8s-pod-network.607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" Workload="ip--172--31--29--120-k8s-goldmane--58fd7646b9--p759q-eth0" Jul 12 00:26:42.580738 env[1913]: 2025-07-12 00:26:42.524 [INFO][5051] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:26:42.580738 env[1913]: 2025-07-12 00:26:42.554 [INFO][4960] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" Jul 12 00:26:42.587496 systemd[1]: run-netns-cni\x2db7ab6dce\x2da58a\x2dfc9f\x2d4e15\x2dfa33ebbebc37.mount: Deactivated successfully. 
Jul 12 00:26:42.604237 env[1913]: time="2025-07-12T00:26:42.604150131Z" level=info msg="TearDown network for sandbox \"607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a\" successfully" Jul 12 00:26:42.604407 env[1913]: time="2025-07-12T00:26:42.604212737Z" level=info msg="StopPodSandbox for \"607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a\" returns successfully" Jul 12 00:26:42.616196 env[1913]: time="2025-07-12T00:26:42.616140609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-p759q,Uid:213fb0de-6a80-4aa5-aeb1-a0af932ccfc6,Namespace:calico-system,Attempt:1,}" Jul 12 00:26:42.616754 env[1913]: 2025-07-12 00:26:41.979 [INFO][4955] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" Jul 12 00:26:42.616754 env[1913]: 2025-07-12 00:26:41.979 [INFO][4955] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" iface="eth0" netns="/var/run/netns/cni-ed748ae3-2edc-1fe5-e60b-b33678bc4cf4" Jul 12 00:26:42.616754 env[1913]: 2025-07-12 00:26:41.982 [INFO][4955] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" iface="eth0" netns="/var/run/netns/cni-ed748ae3-2edc-1fe5-e60b-b33678bc4cf4" Jul 12 00:26:42.616754 env[1913]: 2025-07-12 00:26:41.984 [INFO][4955] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" iface="eth0" netns="/var/run/netns/cni-ed748ae3-2edc-1fe5-e60b-b33678bc4cf4" Jul 12 00:26:42.616754 env[1913]: 2025-07-12 00:26:41.984 [INFO][4955] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" Jul 12 00:26:42.616754 env[1913]: 2025-07-12 00:26:41.984 [INFO][4955] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" Jul 12 00:26:42.616754 env[1913]: 2025-07-12 00:26:42.497 [INFO][5034] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" HandleID="k8s-pod-network.a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" Workload="ip--172--31--29--120-k8s-calico--kube--controllers--b9c4d9bf9--swqxk-eth0" Jul 12 00:26:42.616754 env[1913]: 2025-07-12 00:26:42.509 [INFO][5034] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:26:42.616754 env[1913]: 2025-07-12 00:26:42.524 [INFO][5034] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:26:42.616754 env[1913]: 2025-07-12 00:26:42.598 [WARNING][5034] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" HandleID="k8s-pod-network.a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" Workload="ip--172--31--29--120-k8s-calico--kube--controllers--b9c4d9bf9--swqxk-eth0" Jul 12 00:26:42.616754 env[1913]: 2025-07-12 00:26:42.598 [INFO][5034] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" HandleID="k8s-pod-network.a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" Workload="ip--172--31--29--120-k8s-calico--kube--controllers--b9c4d9bf9--swqxk-eth0" Jul 12 00:26:42.616754 env[1913]: 2025-07-12 00:26:42.602 [INFO][5034] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:26:42.616754 env[1913]: 2025-07-12 00:26:42.606 [INFO][4955] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" Jul 12 00:26:42.622645 systemd[1]: run-netns-cni\x2ded748ae3\x2d2edc\x2d1fe5\x2de60b\x2db33678bc4cf4.mount: Deactivated successfully. 
Jul 12 00:26:42.627495 env[1913]: time="2025-07-12T00:26:42.627411961Z" level=info msg="TearDown network for sandbox \"a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431\" successfully" Jul 12 00:26:42.627495 env[1913]: time="2025-07-12T00:26:42.627485679Z" level=info msg="StopPodSandbox for \"a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431\" returns successfully" Jul 12 00:26:42.628864 env[1913]: time="2025-07-12T00:26:42.628787535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b9c4d9bf9-swqxk,Uid:0398604d-88a0-41c4-996f-ea9a3a6c7de4,Namespace:calico-system,Attempt:1,}" Jul 12 00:26:42.916335 systemd-networkd[1586]: calic4941d131f2: Link UP Jul 12 00:26:42.923416 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 12 00:26:42.923534 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic4941d131f2: link becomes ready Jul 12 00:26:42.926604 systemd-networkd[1586]: calic4941d131f2: Gained carrier Jul 12 00:26:42.947961 systemd-networkd[1586]: cali93d6a4cb797: Gained IPv6LL Jul 12 00:26:42.979878 env[1913]: 2025-07-12 00:26:42.680 [INFO][5065] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--gwch8-eth0 calico-apiserver-8494455ff7- calico-apiserver 7051a960-7ce8-45f1-8249-f71049b41599 1014 0 2025-07-12 00:26:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8494455ff7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-29-120 calico-apiserver-8494455ff7-gwch8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic4941d131f2 [] [] }} ContainerID="2bce4a2ff669ed9ed27e9e09dd730c7cb2d117142ce4f2b40c355e3b5c893604" Namespace="calico-apiserver" 
Pod="calico-apiserver-8494455ff7-gwch8" WorkloadEndpoint="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--gwch8-" Jul 12 00:26:42.979878 env[1913]: 2025-07-12 00:26:42.681 [INFO][5065] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2bce4a2ff669ed9ed27e9e09dd730c7cb2d117142ce4f2b40c355e3b5c893604" Namespace="calico-apiserver" Pod="calico-apiserver-8494455ff7-gwch8" WorkloadEndpoint="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--gwch8-eth0" Jul 12 00:26:42.979878 env[1913]: 2025-07-12 00:26:42.808 [INFO][5089] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2bce4a2ff669ed9ed27e9e09dd730c7cb2d117142ce4f2b40c355e3b5c893604" HandleID="k8s-pod-network.2bce4a2ff669ed9ed27e9e09dd730c7cb2d117142ce4f2b40c355e3b5c893604" Workload="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--gwch8-eth0" Jul 12 00:26:42.979878 env[1913]: 2025-07-12 00:26:42.813 [INFO][5089] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2bce4a2ff669ed9ed27e9e09dd730c7cb2d117142ce4f2b40c355e3b5c893604" HandleID="k8s-pod-network.2bce4a2ff669ed9ed27e9e09dd730c7cb2d117142ce4f2b40c355e3b5c893604" Workload="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--gwch8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab510), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-29-120", "pod":"calico-apiserver-8494455ff7-gwch8", "timestamp":"2025-07-12 00:26:42.808362693 +0000 UTC"}, Hostname:"ip-172-31-29-120", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:26:42.979878 env[1913]: 2025-07-12 00:26:42.814 [INFO][5089] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:26:42.979878 env[1913]: 2025-07-12 00:26:42.814 [INFO][5089] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:26:42.979878 env[1913]: 2025-07-12 00:26:42.814 [INFO][5089] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-120' Jul 12 00:26:42.979878 env[1913]: 2025-07-12 00:26:42.839 [INFO][5089] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2bce4a2ff669ed9ed27e9e09dd730c7cb2d117142ce4f2b40c355e3b5c893604" host="ip-172-31-29-120" Jul 12 00:26:42.979878 env[1913]: 2025-07-12 00:26:42.846 [INFO][5089] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-120" Jul 12 00:26:42.979878 env[1913]: 2025-07-12 00:26:42.855 [INFO][5089] ipam/ipam.go 511: Trying affinity for 192.168.107.192/26 host="ip-172-31-29-120" Jul 12 00:26:42.979878 env[1913]: 2025-07-12 00:26:42.859 [INFO][5089] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.192/26 host="ip-172-31-29-120" Jul 12 00:26:42.979878 env[1913]: 2025-07-12 00:26:42.864 [INFO][5089] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.192/26 host="ip-172-31-29-120" Jul 12 00:26:42.979878 env[1913]: 2025-07-12 00:26:42.864 [INFO][5089] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.192/26 handle="k8s-pod-network.2bce4a2ff669ed9ed27e9e09dd730c7cb2d117142ce4f2b40c355e3b5c893604" host="ip-172-31-29-120" Jul 12 00:26:42.979878 env[1913]: 2025-07-12 00:26:42.872 [INFO][5089] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2bce4a2ff669ed9ed27e9e09dd730c7cb2d117142ce4f2b40c355e3b5c893604 Jul 12 00:26:42.979878 env[1913]: 2025-07-12 00:26:42.881 [INFO][5089] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.192/26 handle="k8s-pod-network.2bce4a2ff669ed9ed27e9e09dd730c7cb2d117142ce4f2b40c355e3b5c893604" host="ip-172-31-29-120" Jul 12 00:26:42.979878 env[1913]: 2025-07-12 00:26:42.901 [INFO][5089] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.107.198/26] block=192.168.107.192/26 
handle="k8s-pod-network.2bce4a2ff669ed9ed27e9e09dd730c7cb2d117142ce4f2b40c355e3b5c893604" host="ip-172-31-29-120" Jul 12 00:26:42.979878 env[1913]: 2025-07-12 00:26:42.901 [INFO][5089] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.198/26] handle="k8s-pod-network.2bce4a2ff669ed9ed27e9e09dd730c7cb2d117142ce4f2b40c355e3b5c893604" host="ip-172-31-29-120" Jul 12 00:26:42.979878 env[1913]: 2025-07-12 00:26:42.901 [INFO][5089] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:26:42.979878 env[1913]: 2025-07-12 00:26:42.901 [INFO][5089] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.198/26] IPv6=[] ContainerID="2bce4a2ff669ed9ed27e9e09dd730c7cb2d117142ce4f2b40c355e3b5c893604" HandleID="k8s-pod-network.2bce4a2ff669ed9ed27e9e09dd730c7cb2d117142ce4f2b40c355e3b5c893604" Workload="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--gwch8-eth0" Jul 12 00:26:42.981188 env[1913]: 2025-07-12 00:26:42.910 [INFO][5065] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2bce4a2ff669ed9ed27e9e09dd730c7cb2d117142ce4f2b40c355e3b5c893604" Namespace="calico-apiserver" Pod="calico-apiserver-8494455ff7-gwch8" WorkloadEndpoint="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--gwch8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--gwch8-eth0", GenerateName:"calico-apiserver-8494455ff7-", Namespace:"calico-apiserver", SelfLink:"", UID:"7051a960-7ce8-45f1-8249-f71049b41599", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8494455ff7", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-120", ContainerID:"", Pod:"calico-apiserver-8494455ff7-gwch8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic4941d131f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:26:42.981188 env[1913]: 2025-07-12 00:26:42.910 [INFO][5065] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.198/32] ContainerID="2bce4a2ff669ed9ed27e9e09dd730c7cb2d117142ce4f2b40c355e3b5c893604" Namespace="calico-apiserver" Pod="calico-apiserver-8494455ff7-gwch8" WorkloadEndpoint="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--gwch8-eth0" Jul 12 00:26:42.981188 env[1913]: 2025-07-12 00:26:42.910 [INFO][5065] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic4941d131f2 ContainerID="2bce4a2ff669ed9ed27e9e09dd730c7cb2d117142ce4f2b40c355e3b5c893604" Namespace="calico-apiserver" Pod="calico-apiserver-8494455ff7-gwch8" WorkloadEndpoint="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--gwch8-eth0" Jul 12 00:26:42.981188 env[1913]: 2025-07-12 00:26:42.928 [INFO][5065] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2bce4a2ff669ed9ed27e9e09dd730c7cb2d117142ce4f2b40c355e3b5c893604" Namespace="calico-apiserver" Pod="calico-apiserver-8494455ff7-gwch8" WorkloadEndpoint="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--gwch8-eth0" Jul 12 00:26:42.981188 env[1913]: 2025-07-12 00:26:42.928 [INFO][5065] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2bce4a2ff669ed9ed27e9e09dd730c7cb2d117142ce4f2b40c355e3b5c893604" Namespace="calico-apiserver" Pod="calico-apiserver-8494455ff7-gwch8" WorkloadEndpoint="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--gwch8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--gwch8-eth0", GenerateName:"calico-apiserver-8494455ff7-", Namespace:"calico-apiserver", SelfLink:"", UID:"7051a960-7ce8-45f1-8249-f71049b41599", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8494455ff7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-120", ContainerID:"2bce4a2ff669ed9ed27e9e09dd730c7cb2d117142ce4f2b40c355e3b5c893604", Pod:"calico-apiserver-8494455ff7-gwch8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic4941d131f2", MAC:"fa:21:bb:60:41:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:26:42.981188 env[1913]: 2025-07-12 00:26:42.973 [INFO][5065] cni-plugin/k8s.go 
532: Wrote updated endpoint to datastore ContainerID="2bce4a2ff669ed9ed27e9e09dd730c7cb2d117142ce4f2b40c355e3b5c893604" Namespace="calico-apiserver" Pod="calico-apiserver-8494455ff7-gwch8" WorkloadEndpoint="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--gwch8-eth0" Jul 12 00:26:43.099626 env[1913]: time="2025-07-12T00:26:43.099513974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:26:43.099964 env[1913]: time="2025-07-12T00:26:43.099877376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:26:43.100214 env[1913]: time="2025-07-12T00:26:43.100149805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:26:43.100875 env[1913]: time="2025-07-12T00:26:43.100795933Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2bce4a2ff669ed9ed27e9e09dd730c7cb2d117142ce4f2b40c355e3b5c893604 pid=5141 runtime=io.containerd.runc.v2 Jul 12 00:26:43.123000 audit[5152]: NETFILTER_CFG table=filter:119 family=2 entries=59 op=nft_register_chain pid=5152 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 12 00:26:43.123000 audit[5152]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29492 a0=3 a1=ffffda818d30 a2=0 a3=ffffb1665fa8 items=0 ppid=4315 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:43.123000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 12 00:26:43.155891 systemd-networkd[1586]: cali253581b1bee: Link UP Jul 12 00:26:43.163378 
kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali253581b1bee: link becomes ready Jul 12 00:26:43.162882 systemd-networkd[1586]: cali253581b1bee: Gained carrier Jul 12 00:26:43.197705 env[1913]: 2025-07-12 00:26:42.777 [INFO][5080] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--120-k8s-calico--kube--controllers--b9c4d9bf9--swqxk-eth0 calico-kube-controllers-b9c4d9bf9- calico-system 0398604d-88a0-41c4-996f-ea9a3a6c7de4 1013 0 2025-07-12 00:26:14 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:b9c4d9bf9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-29-120 calico-kube-controllers-b9c4d9bf9-swqxk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali253581b1bee [] [] }} ContainerID="d3ab911008595b5f310545caa2e0ebbb4f7225447171e7cd4e62d1efee834b32" Namespace="calico-system" Pod="calico-kube-controllers-b9c4d9bf9-swqxk" WorkloadEndpoint="ip--172--31--29--120-k8s-calico--kube--controllers--b9c4d9bf9--swqxk-" Jul 12 00:26:43.197705 env[1913]: 2025-07-12 00:26:42.777 [INFO][5080] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d3ab911008595b5f310545caa2e0ebbb4f7225447171e7cd4e62d1efee834b32" Namespace="calico-system" Pod="calico-kube-controllers-b9c4d9bf9-swqxk" WorkloadEndpoint="ip--172--31--29--120-k8s-calico--kube--controllers--b9c4d9bf9--swqxk-eth0" Jul 12 00:26:43.197705 env[1913]: 2025-07-12 00:26:43.003 [INFO][5110] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d3ab911008595b5f310545caa2e0ebbb4f7225447171e7cd4e62d1efee834b32" HandleID="k8s-pod-network.d3ab911008595b5f310545caa2e0ebbb4f7225447171e7cd4e62d1efee834b32" Workload="ip--172--31--29--120-k8s-calico--kube--controllers--b9c4d9bf9--swqxk-eth0" Jul 12 
00:26:43.197705 env[1913]: 2025-07-12 00:26:43.003 [INFO][5110] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d3ab911008595b5f310545caa2e0ebbb4f7225447171e7cd4e62d1efee834b32" HandleID="k8s-pod-network.d3ab911008595b5f310545caa2e0ebbb4f7225447171e7cd4e62d1efee834b32" Workload="ip--172--31--29--120-k8s-calico--kube--controllers--b9c4d9bf9--swqxk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000307930), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-120", "pod":"calico-kube-controllers-b9c4d9bf9-swqxk", "timestamp":"2025-07-12 00:26:43.003283098 +0000 UTC"}, Hostname:"ip-172-31-29-120", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:26:43.197705 env[1913]: 2025-07-12 00:26:43.003 [INFO][5110] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:26:43.197705 env[1913]: 2025-07-12 00:26:43.003 [INFO][5110] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:26:43.197705 env[1913]: 2025-07-12 00:26:43.003 [INFO][5110] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-120' Jul 12 00:26:43.197705 env[1913]: 2025-07-12 00:26:43.042 [INFO][5110] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d3ab911008595b5f310545caa2e0ebbb4f7225447171e7cd4e62d1efee834b32" host="ip-172-31-29-120" Jul 12 00:26:43.197705 env[1913]: 2025-07-12 00:26:43.051 [INFO][5110] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-120" Jul 12 00:26:43.197705 env[1913]: 2025-07-12 00:26:43.059 [INFO][5110] ipam/ipam.go 511: Trying affinity for 192.168.107.192/26 host="ip-172-31-29-120" Jul 12 00:26:43.197705 env[1913]: 2025-07-12 00:26:43.063 [INFO][5110] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.192/26 host="ip-172-31-29-120" Jul 12 00:26:43.197705 env[1913]: 2025-07-12 00:26:43.069 [INFO][5110] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.192/26 host="ip-172-31-29-120" Jul 12 00:26:43.197705 env[1913]: 2025-07-12 00:26:43.070 [INFO][5110] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.192/26 handle="k8s-pod-network.d3ab911008595b5f310545caa2e0ebbb4f7225447171e7cd4e62d1efee834b32" host="ip-172-31-29-120" Jul 12 00:26:43.197705 env[1913]: 2025-07-12 00:26:43.073 [INFO][5110] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d3ab911008595b5f310545caa2e0ebbb4f7225447171e7cd4e62d1efee834b32 Jul 12 00:26:43.197705 env[1913]: 2025-07-12 00:26:43.087 [INFO][5110] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.192/26 handle="k8s-pod-network.d3ab911008595b5f310545caa2e0ebbb4f7225447171e7cd4e62d1efee834b32" host="ip-172-31-29-120" Jul 12 00:26:43.197705 env[1913]: 2025-07-12 00:26:43.108 [INFO][5110] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.107.199/26] block=192.168.107.192/26 
handle="k8s-pod-network.d3ab911008595b5f310545caa2e0ebbb4f7225447171e7cd4e62d1efee834b32" host="ip-172-31-29-120" Jul 12 00:26:43.197705 env[1913]: 2025-07-12 00:26:43.108 [INFO][5110] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.199/26] handle="k8s-pod-network.d3ab911008595b5f310545caa2e0ebbb4f7225447171e7cd4e62d1efee834b32" host="ip-172-31-29-120" Jul 12 00:26:43.197705 env[1913]: 2025-07-12 00:26:43.108 [INFO][5110] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:26:43.197705 env[1913]: 2025-07-12 00:26:43.108 [INFO][5110] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.199/26] IPv6=[] ContainerID="d3ab911008595b5f310545caa2e0ebbb4f7225447171e7cd4e62d1efee834b32" HandleID="k8s-pod-network.d3ab911008595b5f310545caa2e0ebbb4f7225447171e7cd4e62d1efee834b32" Workload="ip--172--31--29--120-k8s-calico--kube--controllers--b9c4d9bf9--swqxk-eth0" Jul 12 00:26:43.201404 env[1913]: 2025-07-12 00:26:43.125 [INFO][5080] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d3ab911008595b5f310545caa2e0ebbb4f7225447171e7cd4e62d1efee834b32" Namespace="calico-system" Pod="calico-kube-controllers-b9c4d9bf9-swqxk" WorkloadEndpoint="ip--172--31--29--120-k8s-calico--kube--controllers--b9c4d9bf9--swqxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--120-k8s-calico--kube--controllers--b9c4d9bf9--swqxk-eth0", GenerateName:"calico-kube-controllers-b9c4d9bf9-", Namespace:"calico-system", SelfLink:"", UID:"0398604d-88a0-41c4-996f-ea9a3a6c7de4", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b9c4d9bf9", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-120", ContainerID:"", Pod:"calico-kube-controllers-b9c4d9bf9-swqxk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.107.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali253581b1bee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:26:43.201404 env[1913]: 2025-07-12 00:26:43.125 [INFO][5080] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.199/32] ContainerID="d3ab911008595b5f310545caa2e0ebbb4f7225447171e7cd4e62d1efee834b32" Namespace="calico-system" Pod="calico-kube-controllers-b9c4d9bf9-swqxk" WorkloadEndpoint="ip--172--31--29--120-k8s-calico--kube--controllers--b9c4d9bf9--swqxk-eth0" Jul 12 00:26:43.201404 env[1913]: 2025-07-12 00:26:43.125 [INFO][5080] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali253581b1bee ContainerID="d3ab911008595b5f310545caa2e0ebbb4f7225447171e7cd4e62d1efee834b32" Namespace="calico-system" Pod="calico-kube-controllers-b9c4d9bf9-swqxk" WorkloadEndpoint="ip--172--31--29--120-k8s-calico--kube--controllers--b9c4d9bf9--swqxk-eth0" Jul 12 00:26:43.201404 env[1913]: 2025-07-12 00:26:43.167 [INFO][5080] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d3ab911008595b5f310545caa2e0ebbb4f7225447171e7cd4e62d1efee834b32" Namespace="calico-system" Pod="calico-kube-controllers-b9c4d9bf9-swqxk" WorkloadEndpoint="ip--172--31--29--120-k8s-calico--kube--controllers--b9c4d9bf9--swqxk-eth0" Jul 12 00:26:43.201404 env[1913]: 
2025-07-12 00:26:43.168 [INFO][5080] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d3ab911008595b5f310545caa2e0ebbb4f7225447171e7cd4e62d1efee834b32" Namespace="calico-system" Pod="calico-kube-controllers-b9c4d9bf9-swqxk" WorkloadEndpoint="ip--172--31--29--120-k8s-calico--kube--controllers--b9c4d9bf9--swqxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--120-k8s-calico--kube--controllers--b9c4d9bf9--swqxk-eth0", GenerateName:"calico-kube-controllers-b9c4d9bf9-", Namespace:"calico-system", SelfLink:"", UID:"0398604d-88a0-41c4-996f-ea9a3a6c7de4", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b9c4d9bf9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-120", ContainerID:"d3ab911008595b5f310545caa2e0ebbb4f7225447171e7cd4e62d1efee834b32", Pod:"calico-kube-controllers-b9c4d9bf9-swqxk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.107.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali253581b1bee", MAC:"2a:3d:d4:0b:69:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 
00:26:43.201404 env[1913]: 2025-07-12 00:26:43.187 [INFO][5080] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d3ab911008595b5f310545caa2e0ebbb4f7225447171e7cd4e62d1efee834b32" Namespace="calico-system" Pod="calico-kube-controllers-b9c4d9bf9-swqxk" WorkloadEndpoint="ip--172--31--29--120-k8s-calico--kube--controllers--b9c4d9bf9--swqxk-eth0" Jul 12 00:26:43.243000 audit[5174]: NETFILTER_CFG table=filter:120 family=2 entries=52 op=nft_register_chain pid=5174 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 12 00:26:43.243000 audit[5174]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24312 a0=3 a1=ffffc9f18220 a2=0 a3=ffff979f2fa8 items=0 ppid=4315 pid=5174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:43.243000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 12 00:26:43.310468 env[1913]: time="2025-07-12T00:26:43.310303525Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:26:43.310468 env[1913]: time="2025-07-12T00:26:43.310390010Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:26:43.329569 env[1913]: time="2025-07-12T00:26:43.310430607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:26:43.329569 env[1913]: time="2025-07-12T00:26:43.312417051Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d3ab911008595b5f310545caa2e0ebbb4f7225447171e7cd4e62d1efee834b32 pid=5181 runtime=io.containerd.runc.v2 Jul 12 00:26:43.332964 env[1913]: time="2025-07-12T00:26:43.332555154Z" level=info msg="StopPodSandbox for \"df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1\"" Jul 12 00:26:43.357820 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali82a774ccf3a: link becomes ready Jul 12 00:26:43.362139 systemd-networkd[1586]: cali82a774ccf3a: Link UP Jul 12 00:26:43.362564 systemd-networkd[1586]: cali82a774ccf3a: Gained carrier Jul 12 00:26:43.392356 env[1913]: 2025-07-12 00:26:42.921 [INFO][5079] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--120-k8s-goldmane--58fd7646b9--p759q-eth0 goldmane-58fd7646b9- calico-system 213fb0de-6a80-4aa5-aeb1-a0af932ccfc6 1015 0 2025-07-12 00:26:13 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-29-120 goldmane-58fd7646b9-p759q eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali82a774ccf3a [] [] }} ContainerID="2e8195d46cf0920bcb43e69b01a439a8aeb2d7d2591cec18fa06f734311cd78a" Namespace="calico-system" Pod="goldmane-58fd7646b9-p759q" WorkloadEndpoint="ip--172--31--29--120-k8s-goldmane--58fd7646b9--p759q-" Jul 12 00:26:43.392356 env[1913]: 2025-07-12 00:26:42.921 [INFO][5079] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2e8195d46cf0920bcb43e69b01a439a8aeb2d7d2591cec18fa06f734311cd78a" Namespace="calico-system" Pod="goldmane-58fd7646b9-p759q" 
WorkloadEndpoint="ip--172--31--29--120-k8s-goldmane--58fd7646b9--p759q-eth0" Jul 12 00:26:43.392356 env[1913]: 2025-07-12 00:26:43.136 [INFO][5122] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e8195d46cf0920bcb43e69b01a439a8aeb2d7d2591cec18fa06f734311cd78a" HandleID="k8s-pod-network.2e8195d46cf0920bcb43e69b01a439a8aeb2d7d2591cec18fa06f734311cd78a" Workload="ip--172--31--29--120-k8s-goldmane--58fd7646b9--p759q-eth0" Jul 12 00:26:43.392356 env[1913]: 2025-07-12 00:26:43.139 [INFO][5122] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2e8195d46cf0920bcb43e69b01a439a8aeb2d7d2591cec18fa06f734311cd78a" HandleID="k8s-pod-network.2e8195d46cf0920bcb43e69b01a439a8aeb2d7d2591cec18fa06f734311cd78a" Workload="ip--172--31--29--120-k8s-goldmane--58fd7646b9--p759q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ca7b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-120", "pod":"goldmane-58fd7646b9-p759q", "timestamp":"2025-07-12 00:26:43.135996588 +0000 UTC"}, Hostname:"ip-172-31-29-120", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:26:43.392356 env[1913]: 2025-07-12 00:26:43.140 [INFO][5122] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:26:43.392356 env[1913]: 2025-07-12 00:26:43.140 [INFO][5122] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:26:43.392356 env[1913]: 2025-07-12 00:26:43.141 [INFO][5122] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-120' Jul 12 00:26:43.392356 env[1913]: 2025-07-12 00:26:43.189 [INFO][5122] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2e8195d46cf0920bcb43e69b01a439a8aeb2d7d2591cec18fa06f734311cd78a" host="ip-172-31-29-120" Jul 12 00:26:43.392356 env[1913]: 2025-07-12 00:26:43.214 [INFO][5122] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-120" Jul 12 00:26:43.392356 env[1913]: 2025-07-12 00:26:43.245 [INFO][5122] ipam/ipam.go 511: Trying affinity for 192.168.107.192/26 host="ip-172-31-29-120" Jul 12 00:26:43.392356 env[1913]: 2025-07-12 00:26:43.254 [INFO][5122] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.192/26 host="ip-172-31-29-120" Jul 12 00:26:43.392356 env[1913]: 2025-07-12 00:26:43.263 [INFO][5122] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.192/26 host="ip-172-31-29-120" Jul 12 00:26:43.392356 env[1913]: 2025-07-12 00:26:43.263 [INFO][5122] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.192/26 handle="k8s-pod-network.2e8195d46cf0920bcb43e69b01a439a8aeb2d7d2591cec18fa06f734311cd78a" host="ip-172-31-29-120" Jul 12 00:26:43.392356 env[1913]: 2025-07-12 00:26:43.270 [INFO][5122] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2e8195d46cf0920bcb43e69b01a439a8aeb2d7d2591cec18fa06f734311cd78a Jul 12 00:26:43.392356 env[1913]: 2025-07-12 00:26:43.282 [INFO][5122] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.192/26 handle="k8s-pod-network.2e8195d46cf0920bcb43e69b01a439a8aeb2d7d2591cec18fa06f734311cd78a" host="ip-172-31-29-120" Jul 12 00:26:43.392356 env[1913]: 2025-07-12 00:26:43.302 [INFO][5122] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.107.200/26] block=192.168.107.192/26 
handle="k8s-pod-network.2e8195d46cf0920bcb43e69b01a439a8aeb2d7d2591cec18fa06f734311cd78a" host="ip-172-31-29-120" Jul 12 00:26:43.392356 env[1913]: 2025-07-12 00:26:43.302 [INFO][5122] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.200/26] handle="k8s-pod-network.2e8195d46cf0920bcb43e69b01a439a8aeb2d7d2591cec18fa06f734311cd78a" host="ip-172-31-29-120" Jul 12 00:26:43.392356 env[1913]: 2025-07-12 00:26:43.302 [INFO][5122] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:26:43.392356 env[1913]: 2025-07-12 00:26:43.302 [INFO][5122] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.200/26] IPv6=[] ContainerID="2e8195d46cf0920bcb43e69b01a439a8aeb2d7d2591cec18fa06f734311cd78a" HandleID="k8s-pod-network.2e8195d46cf0920bcb43e69b01a439a8aeb2d7d2591cec18fa06f734311cd78a" Workload="ip--172--31--29--120-k8s-goldmane--58fd7646b9--p759q-eth0" Jul 12 00:26:43.394101 env[1913]: 2025-07-12 00:26:43.339 [INFO][5079] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2e8195d46cf0920bcb43e69b01a439a8aeb2d7d2591cec18fa06f734311cd78a" Namespace="calico-system" Pod="goldmane-58fd7646b9-p759q" WorkloadEndpoint="ip--172--31--29--120-k8s-goldmane--58fd7646b9--p759q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--120-k8s-goldmane--58fd7646b9--p759q-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"213fb0de-6a80-4aa5-aeb1-a0af932ccfc6", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-120", ContainerID:"", Pod:"goldmane-58fd7646b9-p759q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.107.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali82a774ccf3a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:26:43.394101 env[1913]: 2025-07-12 00:26:43.339 [INFO][5079] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.200/32] ContainerID="2e8195d46cf0920bcb43e69b01a439a8aeb2d7d2591cec18fa06f734311cd78a" Namespace="calico-system" Pod="goldmane-58fd7646b9-p759q" WorkloadEndpoint="ip--172--31--29--120-k8s-goldmane--58fd7646b9--p759q-eth0" Jul 12 00:26:43.394101 env[1913]: 2025-07-12 00:26:43.339 [INFO][5079] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali82a774ccf3a ContainerID="2e8195d46cf0920bcb43e69b01a439a8aeb2d7d2591cec18fa06f734311cd78a" Namespace="calico-system" Pod="goldmane-58fd7646b9-p759q" WorkloadEndpoint="ip--172--31--29--120-k8s-goldmane--58fd7646b9--p759q-eth0" Jul 12 00:26:43.394101 env[1913]: 2025-07-12 00:26:43.345 [INFO][5079] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e8195d46cf0920bcb43e69b01a439a8aeb2d7d2591cec18fa06f734311cd78a" Namespace="calico-system" Pod="goldmane-58fd7646b9-p759q" WorkloadEndpoint="ip--172--31--29--120-k8s-goldmane--58fd7646b9--p759q-eth0" Jul 12 00:26:43.394101 env[1913]: 2025-07-12 00:26:43.345 [INFO][5079] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2e8195d46cf0920bcb43e69b01a439a8aeb2d7d2591cec18fa06f734311cd78a" Namespace="calico-system" 
Pod="goldmane-58fd7646b9-p759q" WorkloadEndpoint="ip--172--31--29--120-k8s-goldmane--58fd7646b9--p759q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--120-k8s-goldmane--58fd7646b9--p759q-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"213fb0de-6a80-4aa5-aeb1-a0af932ccfc6", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-120", ContainerID:"2e8195d46cf0920bcb43e69b01a439a8aeb2d7d2591cec18fa06f734311cd78a", Pod:"goldmane-58fd7646b9-p759q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.107.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali82a774ccf3a", MAC:"42:1c:2c:49:bd:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:26:43.394101 env[1913]: 2025-07-12 00:26:43.368 [INFO][5079] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2e8195d46cf0920bcb43e69b01a439a8aeb2d7d2591cec18fa06f734311cd78a" Namespace="calico-system" Pod="goldmane-58fd7646b9-p759q" WorkloadEndpoint="ip--172--31--29--120-k8s-goldmane--58fd7646b9--p759q-eth0" Jul 12 00:26:43.460794 env[1913]: 
time="2025-07-12T00:26:43.455052236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:26:43.460794 env[1913]: time="2025-07-12T00:26:43.455139250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:26:43.460794 env[1913]: time="2025-07-12T00:26:43.455166238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:26:43.460794 env[1913]: time="2025-07-12T00:26:43.455741408Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2e8195d46cf0920bcb43e69b01a439a8aeb2d7d2591cec18fa06f734311cd78a pid=5234 runtime=io.containerd.runc.v2 Jul 12 00:26:43.460000 audit[5231]: NETFILTER_CFG table=filter:121 family=2 entries=64 op=nft_register_chain pid=5231 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 12 00:26:43.460000 audit[5231]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=31104 a0=3 a1=ffffcb70c340 a2=0 a3=ffffb0eaafa8 items=0 ppid=4315 pid=5231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:43.460000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 12 00:26:43.586340 env[1913]: time="2025-07-12T00:26:43.586283795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8494455ff7-gwch8,Uid:7051a960-7ce8-45f1-8249-f71049b41599,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2bce4a2ff669ed9ed27e9e09dd730c7cb2d117142ce4f2b40c355e3b5c893604\"" Jul 12 00:26:43.656913 systemd[1]: 
run-containerd-runc-k8s.io-2e8195d46cf0920bcb43e69b01a439a8aeb2d7d2591cec18fa06f734311cd78a-runc.U8np7K.mount: Deactivated successfully. Jul 12 00:26:43.816088 env[1913]: time="2025-07-12T00:26:43.816014392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b9c4d9bf9-swqxk,Uid:0398604d-88a0-41c4-996f-ea9a3a6c7de4,Namespace:calico-system,Attempt:1,} returns sandbox id \"d3ab911008595b5f310545caa2e0ebbb4f7225447171e7cd4e62d1efee834b32\"" Jul 12 00:26:43.833315 env[1913]: 2025-07-12 00:26:43.633 [WARNING][5220] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--120-k8s-coredns--7c65d6cfc9--6g88r-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"08c64d92-9452-47f2-8a8c-8837e4813c7d", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 25, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-120", ContainerID:"7f610f9b872029e3d2bb2673d5e3147c30df90e05f37546689a0bb39eb45e714", Pod:"coredns-7c65d6cfc9-6g88r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"caliecb817ff36d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:26:43.833315 env[1913]: 2025-07-12 00:26:43.633 [INFO][5220] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" Jul 12 00:26:43.833315 env[1913]: 2025-07-12 00:26:43.633 [INFO][5220] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" iface="eth0" netns="" Jul 12 00:26:43.833315 env[1913]: 2025-07-12 00:26:43.634 [INFO][5220] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" Jul 12 00:26:43.833315 env[1913]: 2025-07-12 00:26:43.634 [INFO][5220] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" Jul 12 00:26:43.833315 env[1913]: 2025-07-12 00:26:43.796 [INFO][5279] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" HandleID="k8s-pod-network.df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" Workload="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--6g88r-eth0" Jul 12 00:26:43.833315 env[1913]: 2025-07-12 00:26:43.796 [INFO][5279] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 12 00:26:43.833315 env[1913]: 2025-07-12 00:26:43.796 [INFO][5279] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:26:43.833315 env[1913]: 2025-07-12 00:26:43.812 [WARNING][5279] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" HandleID="k8s-pod-network.df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" Workload="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--6g88r-eth0" Jul 12 00:26:43.833315 env[1913]: 2025-07-12 00:26:43.812 [INFO][5279] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" HandleID="k8s-pod-network.df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" Workload="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--6g88r-eth0" Jul 12 00:26:43.833315 env[1913]: 2025-07-12 00:26:43.815 [INFO][5279] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:26:43.833315 env[1913]: 2025-07-12 00:26:43.828 [INFO][5220] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" Jul 12 00:26:43.834848 env[1913]: time="2025-07-12T00:26:43.834778722Z" level=info msg="TearDown network for sandbox \"df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1\" successfully" Jul 12 00:26:43.834848 env[1913]: time="2025-07-12T00:26:43.834840572Z" level=info msg="StopPodSandbox for \"df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1\" returns successfully" Jul 12 00:26:43.837281 env[1913]: time="2025-07-12T00:26:43.837149641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-p759q,Uid:213fb0de-6a80-4aa5-aeb1-a0af932ccfc6,Namespace:calico-system,Attempt:1,} returns sandbox id \"2e8195d46cf0920bcb43e69b01a439a8aeb2d7d2591cec18fa06f734311cd78a\"" Jul 12 00:26:43.837659 env[1913]: time="2025-07-12T00:26:43.837593277Z" level=info msg="RemovePodSandbox for \"df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1\"" Jul 12 00:26:43.837753 env[1913]: time="2025-07-12T00:26:43.837664451Z" level=info msg="Forcibly stopping sandbox \"df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1\"" Jul 12 00:26:44.070979 env[1913]: 2025-07-12 00:26:44.002 [WARNING][5312] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--120-k8s-coredns--7c65d6cfc9--6g88r-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"08c64d92-9452-47f2-8a8c-8837e4813c7d", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 25, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-120", ContainerID:"7f610f9b872029e3d2bb2673d5e3147c30df90e05f37546689a0bb39eb45e714", Pod:"coredns-7c65d6cfc9-6g88r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliecb817ff36d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:26:44.070979 env[1913]: 2025-07-12 00:26:44.003 
[INFO][5312] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" Jul 12 00:26:44.070979 env[1913]: 2025-07-12 00:26:44.003 [INFO][5312] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" iface="eth0" netns="" Jul 12 00:26:44.070979 env[1913]: 2025-07-12 00:26:44.003 [INFO][5312] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" Jul 12 00:26:44.070979 env[1913]: 2025-07-12 00:26:44.003 [INFO][5312] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" Jul 12 00:26:44.070979 env[1913]: 2025-07-12 00:26:44.048 [INFO][5319] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" HandleID="k8s-pod-network.df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" Workload="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--6g88r-eth0" Jul 12 00:26:44.070979 env[1913]: 2025-07-12 00:26:44.048 [INFO][5319] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:26:44.070979 env[1913]: 2025-07-12 00:26:44.048 [INFO][5319] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:26:44.070979 env[1913]: 2025-07-12 00:26:44.061 [WARNING][5319] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" HandleID="k8s-pod-network.df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" Workload="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--6g88r-eth0" Jul 12 00:26:44.070979 env[1913]: 2025-07-12 00:26:44.061 [INFO][5319] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" HandleID="k8s-pod-network.df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" Workload="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--6g88r-eth0" Jul 12 00:26:44.070979 env[1913]: 2025-07-12 00:26:44.064 [INFO][5319] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:26:44.070979 env[1913]: 2025-07-12 00:26:44.067 [INFO][5312] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1" Jul 12 00:26:44.072014 env[1913]: time="2025-07-12T00:26:44.071962659Z" level=info msg="TearDown network for sandbox \"df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1\" successfully" Jul 12 00:26:44.078790 env[1913]: time="2025-07-12T00:26:44.078729362Z" level=info msg="RemovePodSandbox \"df522d059dce4797a582e7faae7a53b2fd9dd504948eebd59e2722f51d298fd1\" returns successfully" Jul 12 00:26:44.079735 env[1913]: time="2025-07-12T00:26:44.079688971Z" level=info msg="StopPodSandbox for \"d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8\"" Jul 12 00:26:44.255388 env[1913]: 2025-07-12 00:26:44.167 [WARNING][5333] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--bxk4f-eth0", GenerateName:"calico-apiserver-8494455ff7-", Namespace:"calico-apiserver", SelfLink:"", UID:"c1b2666c-1d6f-4ba9-9d83-e51550e0fc3d", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8494455ff7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-120", ContainerID:"734b74e9a2fff41ce9219ca962abe2d7890ebbf64b1024712ba431fd3511eed1", Pod:"calico-apiserver-8494455ff7-bxk4f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali93d6a4cb797", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:26:44.255388 env[1913]: 2025-07-12 00:26:44.168 [INFO][5333] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" Jul 12 00:26:44.255388 env[1913]: 2025-07-12 00:26:44.168 [INFO][5333] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" iface="eth0" netns="" Jul 12 00:26:44.255388 env[1913]: 2025-07-12 00:26:44.168 [INFO][5333] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" Jul 12 00:26:44.255388 env[1913]: 2025-07-12 00:26:44.168 [INFO][5333] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" Jul 12 00:26:44.255388 env[1913]: 2025-07-12 00:26:44.220 [INFO][5342] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" HandleID="k8s-pod-network.d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" Workload="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--bxk4f-eth0" Jul 12 00:26:44.255388 env[1913]: 2025-07-12 00:26:44.220 [INFO][5342] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:26:44.255388 env[1913]: 2025-07-12 00:26:44.220 [INFO][5342] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:26:44.255388 env[1913]: 2025-07-12 00:26:44.237 [WARNING][5342] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" HandleID="k8s-pod-network.d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" Workload="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--bxk4f-eth0" Jul 12 00:26:44.255388 env[1913]: 2025-07-12 00:26:44.237 [INFO][5342] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" HandleID="k8s-pod-network.d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" Workload="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--bxk4f-eth0" Jul 12 00:26:44.255388 env[1913]: 2025-07-12 00:26:44.240 [INFO][5342] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:26:44.255388 env[1913]: 2025-07-12 00:26:44.245 [INFO][5333] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" Jul 12 00:26:44.262363 env[1913]: time="2025-07-12T00:26:44.262300364Z" level=info msg="TearDown network for sandbox \"d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8\" successfully" Jul 12 00:26:44.262611 env[1913]: time="2025-07-12T00:26:44.262568929Z" level=info msg="StopPodSandbox for \"d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8\" returns successfully" Jul 12 00:26:44.263953 env[1913]: time="2025-07-12T00:26:44.263905848Z" level=info msg="RemovePodSandbox for \"d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8\"" Jul 12 00:26:44.264379 env[1913]: time="2025-07-12T00:26:44.264156689Z" level=info msg="Forcibly stopping sandbox \"d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8\"" Jul 12 00:26:44.371858 env[1913]: time="2025-07-12T00:26:44.370108343Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 
00:26:44.374838 env[1913]: time="2025-07-12T00:26:44.374783529Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:44.378580 env[1913]: time="2025-07-12T00:26:44.378527211Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:44.382098 env[1913]: time="2025-07-12T00:26:44.382041209Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:44.383483 env[1913]: time="2025-07-12T00:26:44.383414465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 12 00:26:44.391777 env[1913]: time="2025-07-12T00:26:44.391714748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 12 00:26:44.398700 env[1913]: time="2025-07-12T00:26:44.398642818Z" level=info msg="CreateContainer within sandbox \"a62887f4ece0f5eddffbe23cddd5512151b7320ba36a356f9be735564ddc8fbc\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 12 00:26:44.438600 env[1913]: time="2025-07-12T00:26:44.438540076Z" level=info msg="CreateContainer within sandbox \"a62887f4ece0f5eddffbe23cddd5512151b7320ba36a356f9be735564ddc8fbc\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"feff590ef40ab028fbc5e93d8c492829ff1ce17ba3bc3d54370565869f6ec7b2\"" Jul 12 00:26:44.449920 env[1913]: 2025-07-12 00:26:44.345 [WARNING][5362] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--bxk4f-eth0", GenerateName:"calico-apiserver-8494455ff7-", Namespace:"calico-apiserver", SelfLink:"", UID:"c1b2666c-1d6f-4ba9-9d83-e51550e0fc3d", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8494455ff7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-120", ContainerID:"734b74e9a2fff41ce9219ca962abe2d7890ebbf64b1024712ba431fd3511eed1", Pod:"calico-apiserver-8494455ff7-bxk4f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali93d6a4cb797", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:26:44.449920 env[1913]: 2025-07-12 00:26:44.346 [INFO][5362] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" Jul 12 00:26:44.449920 env[1913]: 2025-07-12 00:26:44.346 [INFO][5362] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" iface="eth0" netns="" Jul 12 00:26:44.449920 env[1913]: 2025-07-12 00:26:44.346 [INFO][5362] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" Jul 12 00:26:44.449920 env[1913]: 2025-07-12 00:26:44.346 [INFO][5362] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" Jul 12 00:26:44.449920 env[1913]: 2025-07-12 00:26:44.407 [INFO][5369] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" HandleID="k8s-pod-network.d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" Workload="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--bxk4f-eth0" Jul 12 00:26:44.449920 env[1913]: 2025-07-12 00:26:44.407 [INFO][5369] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:26:44.449920 env[1913]: 2025-07-12 00:26:44.407 [INFO][5369] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:26:44.449920 env[1913]: 2025-07-12 00:26:44.430 [WARNING][5369] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" HandleID="k8s-pod-network.d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" Workload="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--bxk4f-eth0" Jul 12 00:26:44.449920 env[1913]: 2025-07-12 00:26:44.430 [INFO][5369] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" HandleID="k8s-pod-network.d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" Workload="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--bxk4f-eth0" Jul 12 00:26:44.449920 env[1913]: 2025-07-12 00:26:44.433 [INFO][5369] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:26:44.449920 env[1913]: 2025-07-12 00:26:44.437 [INFO][5362] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8" Jul 12 00:26:44.452483 env[1913]: time="2025-07-12T00:26:44.452400957Z" level=info msg="TearDown network for sandbox \"d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8\" successfully" Jul 12 00:26:44.455044 env[1913]: time="2025-07-12T00:26:44.454982022Z" level=info msg="StartContainer for \"feff590ef40ab028fbc5e93d8c492829ff1ce17ba3bc3d54370565869f6ec7b2\"" Jul 12 00:26:44.465924 env[1913]: time="2025-07-12T00:26:44.465857226Z" level=info msg="RemovePodSandbox \"d7b536c7e466a387ece98d9e186da271804791c65cb5ebce442834e918249cb8\" returns successfully" Jul 12 00:26:44.466671 env[1913]: time="2025-07-12T00:26:44.466625779Z" level=info msg="StopPodSandbox for \"70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f\"" Jul 12 00:26:44.473184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3585056030.mount: Deactivated successfully. 
Jul 12 00:26:44.538889 systemd[1]: run-containerd-runc-k8s.io-42b9d6dc0035ca32ad37cd639945c7b3f5acda86cdae0a63724ad380d59dcb45-runc.yKRfVi.mount: Deactivated successfully. Jul 12 00:26:44.841589 env[1913]: time="2025-07-12T00:26:44.841514495Z" level=info msg="StartContainer for \"feff590ef40ab028fbc5e93d8c492829ff1ce17ba3bc3d54370565869f6ec7b2\" returns successfully" Jul 12 00:26:44.910470 env[1913]: 2025-07-12 00:26:44.804 [WARNING][5411] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--120-k8s-coredns--7c65d6cfc9--msgjt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b25db915-a031-4109-9564-cc0834ce0083", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 25, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-120", ContainerID:"55d5be9e0599d4521e0853d793265afe3c6fd026f10dabfb9d1a1ed7b688f7be", Pod:"coredns-7c65d6cfc9-msgjt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic47b68d46fc", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:26:44.910470 env[1913]: 2025-07-12 00:26:44.805 [INFO][5411] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" Jul 12 00:26:44.910470 env[1913]: 2025-07-12 00:26:44.805 [INFO][5411] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" iface="eth0" netns="" Jul 12 00:26:44.910470 env[1913]: 2025-07-12 00:26:44.805 [INFO][5411] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" Jul 12 00:26:44.910470 env[1913]: 2025-07-12 00:26:44.805 [INFO][5411] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" Jul 12 00:26:44.910470 env[1913]: 2025-07-12 00:26:44.876 [INFO][5442] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" HandleID="k8s-pod-network.70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" Workload="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--msgjt-eth0" Jul 12 00:26:44.910470 env[1913]: 2025-07-12 00:26:44.876 [INFO][5442] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 12 00:26:44.910470 env[1913]: 2025-07-12 00:26:44.876 [INFO][5442] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:26:44.910470 env[1913]: 2025-07-12 00:26:44.892 [WARNING][5442] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" HandleID="k8s-pod-network.70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" Workload="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--msgjt-eth0" Jul 12 00:26:44.910470 env[1913]: 2025-07-12 00:26:44.893 [INFO][5442] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" HandleID="k8s-pod-network.70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" Workload="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--msgjt-eth0" Jul 12 00:26:44.910470 env[1913]: 2025-07-12 00:26:44.898 [INFO][5442] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:26:44.910470 env[1913]: 2025-07-12 00:26:44.901 [INFO][5411] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" Jul 12 00:26:44.911562 env[1913]: time="2025-07-12T00:26:44.911491172Z" level=info msg="TearDown network for sandbox \"70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f\" successfully" Jul 12 00:26:44.911740 env[1913]: time="2025-07-12T00:26:44.911700791Z" level=info msg="StopPodSandbox for \"70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f\" returns successfully" Jul 12 00:26:44.912995 env[1913]: time="2025-07-12T00:26:44.912899264Z" level=info msg="RemovePodSandbox for \"70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f\"" Jul 12 00:26:44.913382 env[1913]: time="2025-07-12T00:26:44.913277139Z" level=info msg="Forcibly stopping sandbox \"70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f\"" Jul 12 00:26:44.917977 systemd-networkd[1586]: calic4941d131f2: Gained IPv6LL Jul 12 00:26:44.981969 systemd-networkd[1586]: cali82a774ccf3a: Gained IPv6LL Jul 12 00:26:45.046056 systemd-networkd[1586]: cali253581b1bee: Gained IPv6LL Jul 12 00:26:45.058000 audit[5469]: NETFILTER_CFG table=filter:122 family=2 entries=13 op=nft_register_rule pid=5469 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:26:45.058000 audit[5469]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffe8bbc9e0 a2=0 a3=1 items=0 ppid=3133 pid=5469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:45.058000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:26:45.070000 audit[5469]: NETFILTER_CFG table=nat:123 family=2 entries=27 op=nft_register_chain pid=5469 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:26:45.070000 audit[5469]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=9348 a0=3 a1=ffffe8bbc9e0 a2=0 a3=1 items=0 ppid=3133 pid=5469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:45.070000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:26:45.231847 env[1913]: 2025-07-12 00:26:45.075 [WARNING][5462] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--120-k8s-coredns--7c65d6cfc9--msgjt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b25db915-a031-4109-9564-cc0834ce0083", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 25, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-120", ContainerID:"55d5be9e0599d4521e0853d793265afe3c6fd026f10dabfb9d1a1ed7b688f7be", Pod:"coredns-7c65d6cfc9-msgjt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic47b68d46fc", 
MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:26:45.231847 env[1913]: 2025-07-12 00:26:45.076 [INFO][5462] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" Jul 12 00:26:45.231847 env[1913]: 2025-07-12 00:26:45.076 [INFO][5462] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" iface="eth0" netns="" Jul 12 00:26:45.231847 env[1913]: 2025-07-12 00:26:45.077 [INFO][5462] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" Jul 12 00:26:45.231847 env[1913]: 2025-07-12 00:26:45.077 [INFO][5462] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" Jul 12 00:26:45.231847 env[1913]: 2025-07-12 00:26:45.174 [INFO][5471] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" HandleID="k8s-pod-network.70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" Workload="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--msgjt-eth0" Jul 12 00:26:45.231847 env[1913]: 2025-07-12 00:26:45.175 [INFO][5471] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 12 00:26:45.231847 env[1913]: 2025-07-12 00:26:45.175 [INFO][5471] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:26:45.231847 env[1913]: 2025-07-12 00:26:45.216 [WARNING][5471] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" HandleID="k8s-pod-network.70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" Workload="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--msgjt-eth0" Jul 12 00:26:45.231847 env[1913]: 2025-07-12 00:26:45.216 [INFO][5471] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" HandleID="k8s-pod-network.70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" Workload="ip--172--31--29--120-k8s-coredns--7c65d6cfc9--msgjt-eth0" Jul 12 00:26:45.231847 env[1913]: 2025-07-12 00:26:45.220 [INFO][5471] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:26:45.231847 env[1913]: 2025-07-12 00:26:45.227 [INFO][5462] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f" Jul 12 00:26:45.233092 env[1913]: time="2025-07-12T00:26:45.233035564Z" level=info msg="TearDown network for sandbox \"70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f\" successfully" Jul 12 00:26:45.245653 env[1913]: time="2025-07-12T00:26:45.245553963Z" level=info msg="RemovePodSandbox \"70f885c13344010a4d384b1be399d666c6d0e2ad2ecaf66d9fb391b3f91d6e8f\" returns successfully" Jul 12 00:26:45.246856 env[1913]: time="2025-07-12T00:26:45.246796705Z" level=info msg="StopPodSandbox for \"f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04\"" Jul 12 00:26:45.301290 kubelet[2983]: I0712 00:26:45.299451 2983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7947b8c6c6-2mhn7" podStartSLOduration=2.5890222720000002 podStartE2EDuration="10.299427954s" podCreationTimestamp="2025-07-12 00:26:35 +0000 UTC" firstStartedPulling="2025-07-12 00:26:36.675801628 +0000 UTC m=+53.649002612" lastFinishedPulling="2025-07-12 00:26:44.386207298 +0000 UTC m=+61.359408294" observedRunningTime="2025-07-12 00:26:44.965364585 +0000 UTC m=+61.938565605" watchObservedRunningTime="2025-07-12 00:26:45.299427954 +0000 UTC m=+62.272628950" Jul 12 00:26:45.473469 systemd[1]: run-containerd-runc-k8s.io-feff590ef40ab028fbc5e93d8c492829ff1ce17ba3bc3d54370565869f6ec7b2-runc.E27fHP.mount: Deactivated successfully. Jul 12 00:26:45.509303 env[1913]: 2025-07-12 00:26:45.375 [WARNING][5487] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--120-k8s-csi--node--driver--g7wxf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"355545c7-e2b3-4e21-bab3-2e3ea1245fce", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-120", ContainerID:"89020aaf13edd8f4e41e6352c6e10f4894246d1f5f0a17682ab1687e04ee8af7", Pod:"csi-node-driver-g7wxf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.107.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib7ab39c51fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:26:45.509303 env[1913]: 2025-07-12 00:26:45.375 [INFO][5487] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" Jul 12 00:26:45.509303 env[1913]: 2025-07-12 00:26:45.375 [INFO][5487] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" iface="eth0" netns="" Jul 12 00:26:45.509303 env[1913]: 2025-07-12 00:26:45.375 [INFO][5487] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" Jul 12 00:26:45.509303 env[1913]: 2025-07-12 00:26:45.375 [INFO][5487] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" Jul 12 00:26:45.509303 env[1913]: 2025-07-12 00:26:45.467 [INFO][5494] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" HandleID="k8s-pod-network.f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" Workload="ip--172--31--29--120-k8s-csi--node--driver--g7wxf-eth0" Jul 12 00:26:45.509303 env[1913]: 2025-07-12 00:26:45.475 [INFO][5494] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:26:45.509303 env[1913]: 2025-07-12 00:26:45.475 [INFO][5494] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:26:45.509303 env[1913]: 2025-07-12 00:26:45.489 [WARNING][5494] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" HandleID="k8s-pod-network.f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" Workload="ip--172--31--29--120-k8s-csi--node--driver--g7wxf-eth0" Jul 12 00:26:45.509303 env[1913]: 2025-07-12 00:26:45.489 [INFO][5494] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" HandleID="k8s-pod-network.f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" Workload="ip--172--31--29--120-k8s-csi--node--driver--g7wxf-eth0" Jul 12 00:26:45.509303 env[1913]: 2025-07-12 00:26:45.500 [INFO][5494] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:26:45.509303 env[1913]: 2025-07-12 00:26:45.505 [INFO][5487] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" Jul 12 00:26:45.510820 env[1913]: time="2025-07-12T00:26:45.510767730Z" level=info msg="TearDown network for sandbox \"f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04\" successfully" Jul 12 00:26:45.510961 env[1913]: time="2025-07-12T00:26:45.510928581Z" level=info msg="StopPodSandbox for \"f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04\" returns successfully" Jul 12 00:26:45.512259 env[1913]: time="2025-07-12T00:26:45.512194435Z" level=info msg="RemovePodSandbox for \"f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04\"" Jul 12 00:26:45.512702 env[1913]: time="2025-07-12T00:26:45.512622050Z" level=info msg="Forcibly stopping sandbox \"f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04\"" Jul 12 00:26:45.675410 env[1913]: 2025-07-12 00:26:45.584 [WARNING][5511] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--120-k8s-csi--node--driver--g7wxf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"355545c7-e2b3-4e21-bab3-2e3ea1245fce", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-120", ContainerID:"89020aaf13edd8f4e41e6352c6e10f4894246d1f5f0a17682ab1687e04ee8af7", Pod:"csi-node-driver-g7wxf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.107.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib7ab39c51fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:26:45.675410 env[1913]: 2025-07-12 00:26:45.585 [INFO][5511] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" Jul 12 00:26:45.675410 env[1913]: 2025-07-12 00:26:45.585 [INFO][5511] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" iface="eth0" netns="" Jul 12 00:26:45.675410 env[1913]: 2025-07-12 00:26:45.585 [INFO][5511] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" Jul 12 00:26:45.675410 env[1913]: 2025-07-12 00:26:45.585 [INFO][5511] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" Jul 12 00:26:45.675410 env[1913]: 2025-07-12 00:26:45.640 [INFO][5518] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" HandleID="k8s-pod-network.f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" Workload="ip--172--31--29--120-k8s-csi--node--driver--g7wxf-eth0" Jul 12 00:26:45.675410 env[1913]: 2025-07-12 00:26:45.641 [INFO][5518] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:26:45.675410 env[1913]: 2025-07-12 00:26:45.641 [INFO][5518] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:26:45.675410 env[1913]: 2025-07-12 00:26:45.662 [WARNING][5518] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" HandleID="k8s-pod-network.f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" Workload="ip--172--31--29--120-k8s-csi--node--driver--g7wxf-eth0" Jul 12 00:26:45.675410 env[1913]: 2025-07-12 00:26:45.665 [INFO][5518] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" HandleID="k8s-pod-network.f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" Workload="ip--172--31--29--120-k8s-csi--node--driver--g7wxf-eth0" Jul 12 00:26:45.675410 env[1913]: 2025-07-12 00:26:45.668 [INFO][5518] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:26:45.675410 env[1913]: 2025-07-12 00:26:45.671 [INFO][5511] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04" Jul 12 00:26:45.676400 env[1913]: time="2025-07-12T00:26:45.675457999Z" level=info msg="TearDown network for sandbox \"f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04\" successfully" Jul 12 00:26:45.681932 env[1913]: time="2025-07-12T00:26:45.681854469Z" level=info msg="RemovePodSandbox \"f15f8411bc0e0745794a6488651b43340098e3fe8cd6e2b3c1e2a2abf2ed7d04\" returns successfully" Jul 12 00:26:45.682835 env[1913]: time="2025-07-12T00:26:45.682762717Z" level=info msg="StopPodSandbox for \"0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6\"" Jul 12 00:26:45.846500 env[1913]: 2025-07-12 00:26:45.763 [WARNING][5534] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" WorkloadEndpoint="ip--172--31--29--120-k8s-whisker--5f777c54c6--9qb2c-eth0" Jul 12 00:26:45.846500 env[1913]: 2025-07-12 00:26:45.764 [INFO][5534] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" Jul 12 00:26:45.846500 env[1913]: 2025-07-12 00:26:45.764 [INFO][5534] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" iface="eth0" netns="" Jul 12 00:26:45.846500 env[1913]: 2025-07-12 00:26:45.764 [INFO][5534] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" Jul 12 00:26:45.846500 env[1913]: 2025-07-12 00:26:45.764 [INFO][5534] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" Jul 12 00:26:45.846500 env[1913]: 2025-07-12 00:26:45.822 [INFO][5541] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" HandleID="k8s-pod-network.0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" Workload="ip--172--31--29--120-k8s-whisker--5f777c54c6--9qb2c-eth0" Jul 12 00:26:45.846500 env[1913]: 2025-07-12 00:26:45.823 [INFO][5541] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:26:45.846500 env[1913]: 2025-07-12 00:26:45.823 [INFO][5541] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:26:45.846500 env[1913]: 2025-07-12 00:26:45.835 [WARNING][5541] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" HandleID="k8s-pod-network.0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" Workload="ip--172--31--29--120-k8s-whisker--5f777c54c6--9qb2c-eth0" Jul 12 00:26:45.846500 env[1913]: 2025-07-12 00:26:45.835 [INFO][5541] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" HandleID="k8s-pod-network.0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" Workload="ip--172--31--29--120-k8s-whisker--5f777c54c6--9qb2c-eth0" Jul 12 00:26:45.846500 env[1913]: 2025-07-12 00:26:45.839 [INFO][5541] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:26:45.846500 env[1913]: 2025-07-12 00:26:45.842 [INFO][5534] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" Jul 12 00:26:45.847469 env[1913]: time="2025-07-12T00:26:45.847415402Z" level=info msg="TearDown network for sandbox \"0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6\" successfully" Jul 12 00:26:45.847616 env[1913]: time="2025-07-12T00:26:45.847582709Z" level=info msg="StopPodSandbox for \"0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6\" returns successfully" Jul 12 00:26:45.848497 env[1913]: time="2025-07-12T00:26:45.848450107Z" level=info msg="RemovePodSandbox for \"0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6\"" Jul 12 00:26:45.848733 env[1913]: time="2025-07-12T00:26:45.848671775Z" level=info msg="Forcibly stopping sandbox \"0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6\"" Jul 12 00:26:46.030759 env[1913]: time="2025-07-12T00:26:46.030705715Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:46.037480 env[1913]: 
time="2025-07-12T00:26:46.037427796Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:46.042193 env[1913]: time="2025-07-12T00:26:46.042140743Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:46.052635 env[1913]: time="2025-07-12T00:26:46.052574294Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:46.055522 env[1913]: time="2025-07-12T00:26:46.054404409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 12 00:26:46.063407 env[1913]: 2025-07-12 00:26:45.968 [WARNING][5557] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" WorkloadEndpoint="ip--172--31--29--120-k8s-whisker--5f777c54c6--9qb2c-eth0" Jul 12 00:26:46.063407 env[1913]: 2025-07-12 00:26:45.969 [INFO][5557] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" Jul 12 00:26:46.063407 env[1913]: 2025-07-12 00:26:45.969 [INFO][5557] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" iface="eth0" netns="" Jul 12 00:26:46.063407 env[1913]: 2025-07-12 00:26:45.969 [INFO][5557] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" Jul 12 00:26:46.063407 env[1913]: 2025-07-12 00:26:45.969 [INFO][5557] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" Jul 12 00:26:46.063407 env[1913]: 2025-07-12 00:26:46.022 [INFO][5564] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" HandleID="k8s-pod-network.0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" Workload="ip--172--31--29--120-k8s-whisker--5f777c54c6--9qb2c-eth0" Jul 12 00:26:46.063407 env[1913]: 2025-07-12 00:26:46.022 [INFO][5564] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:26:46.063407 env[1913]: 2025-07-12 00:26:46.022 [INFO][5564] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:26:46.063407 env[1913]: 2025-07-12 00:26:46.039 [WARNING][5564] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" HandleID="k8s-pod-network.0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" Workload="ip--172--31--29--120-k8s-whisker--5f777c54c6--9qb2c-eth0" Jul 12 00:26:46.063407 env[1913]: 2025-07-12 00:26:46.040 [INFO][5564] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" HandleID="k8s-pod-network.0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" Workload="ip--172--31--29--120-k8s-whisker--5f777c54c6--9qb2c-eth0" Jul 12 00:26:46.063407 env[1913]: 2025-07-12 00:26:46.045 [INFO][5564] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:26:46.063407 env[1913]: 2025-07-12 00:26:46.057 [INFO][5557] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6" Jul 12 00:26:46.064488 env[1913]: time="2025-07-12T00:26:46.064438270Z" level=info msg="TearDown network for sandbox \"0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6\" successfully" Jul 12 00:26:46.065706 env[1913]: time="2025-07-12T00:26:46.065624574Z" level=info msg="CreateContainer within sandbox \"89020aaf13edd8f4e41e6352c6e10f4894246d1f5f0a17682ab1687e04ee8af7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 12 00:26:46.066067 env[1913]: time="2025-07-12T00:26:46.064583100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 00:26:46.073577 env[1913]: time="2025-07-12T00:26:46.073511970Z" level=info msg="RemovePodSandbox \"0a59b27c33e784827cfaea05344f8745e484b0beb5dbf3cb03b180225c393aa6\" returns successfully" Jul 12 00:26:46.102507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1514575685.mount: Deactivated successfully. 
Jul 12 00:26:46.113673 env[1913]: time="2025-07-12T00:26:46.113589428Z" level=info msg="CreateContainer within sandbox \"89020aaf13edd8f4e41e6352c6e10f4894246d1f5f0a17682ab1687e04ee8af7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"599f1f8244417b44385e2b14b59d73b73d29d61f4dedf8b551d1db6f71287aa0\"" Jul 12 00:26:46.117757 env[1913]: time="2025-07-12T00:26:46.117682936Z" level=info msg="StartContainer for \"599f1f8244417b44385e2b14b59d73b73d29d61f4dedf8b551d1db6f71287aa0\"" Jul 12 00:26:46.257274 env[1913]: time="2025-07-12T00:26:46.257184693Z" level=info msg="StartContainer for \"599f1f8244417b44385e2b14b59d73b73d29d61f4dedf8b551d1db6f71287aa0\" returns successfully" Jul 12 00:26:46.473997 systemd[1]: run-containerd-runc-k8s.io-599f1f8244417b44385e2b14b59d73b73d29d61f4dedf8b551d1db6f71287aa0-runc.hN4N7k.mount: Deactivated successfully. Jul 12 00:26:47.586789 kernel: kauditd_printk_skb: 21 callbacks suppressed Jul 12 00:26:47.586978 kernel: audit: type=1130 audit(1752280007.573:435): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.29.120:22-147.75.109.163:41930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:47.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.29.120:22-147.75.109.163:41930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:47.574718 systemd[1]: Started sshd@8-172.31.29.120:22-147.75.109.163:41930.service. 
Jul 12 00:26:47.770000 audit[5605]: USER_ACCT pid=5605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:47.775994 sshd[5605]: Accepted publickey for core from 147.75.109.163 port 41930 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:26:47.777115 sshd[5605]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:26:47.770000 audit[5605]: CRED_ACQ pid=5605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:47.792598 kernel: audit: type=1101 audit(1752280007.770:436): pid=5605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:47.792770 kernel: audit: type=1103 audit(1752280007.770:437): pid=5605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:47.799237 kernel: audit: type=1006 audit(1752280007.770:438): pid=5605 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jul 12 00:26:47.770000 audit[5605]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcf4784d0 a2=3 a3=1 items=0 ppid=1 pid=5605 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 
key=(null) Jul 12 00:26:47.809775 kernel: audit: type=1300 audit(1752280007.770:438): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcf4784d0 a2=3 a3=1 items=0 ppid=1 pid=5605 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:47.770000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:26:47.815308 kernel: audit: type=1327 audit(1752280007.770:438): proctitle=737368643A20636F7265205B707269765D Jul 12 00:26:47.816682 systemd-logind[1905]: New session 9 of user core. Jul 12 00:26:47.817465 systemd[1]: Started session-9.scope. Jul 12 00:26:47.829000 audit[5605]: USER_START pid=5605 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:47.842000 audit[5608]: CRED_ACQ pid=5608 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:47.854859 kernel: audit: type=1105 audit(1752280007.829:439): pid=5605 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:47.854992 kernel: audit: type=1103 audit(1752280007.842:440): pid=5608 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:48.124666 
sshd[5605]: pam_unix(sshd:session): session closed for user core Jul 12 00:26:48.125000 audit[5605]: USER_END pid=5605 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:48.138640 systemd[1]: sshd@8-172.31.29.120:22-147.75.109.163:41930.service: Deactivated successfully. Jul 12 00:26:48.141419 kernel: audit: type=1106 audit(1752280008.125:441): pid=5605 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:48.141911 kernel: audit: type=1104 audit(1752280008.125:442): pid=5605 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:48.125000 audit[5605]: CRED_DISP pid=5605 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:48.140674 systemd[1]: session-9.scope: Deactivated successfully. Jul 12 00:26:48.140764 systemd-logind[1905]: Session 9 logged out. Waiting for processes to exit. Jul 12 00:26:48.152728 systemd-logind[1905]: Removed session 9. Jul 12 00:26:48.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.29.120:22-147.75.109.163:41930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:26:49.083742 env[1913]: time="2025-07-12T00:26:49.083673272Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:49.090619 env[1913]: time="2025-07-12T00:26:49.090567920Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:49.095766 env[1913]: time="2025-07-12T00:26:49.095700196Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:49.099120 env[1913]: time="2025-07-12T00:26:49.099053577Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:49.100587 env[1913]: time="2025-07-12T00:26:49.100528256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 12 00:26:49.106483 env[1913]: time="2025-07-12T00:26:49.104775327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 00:26:49.110108 env[1913]: time="2025-07-12T00:26:49.110048449Z" level=info msg="CreateContainer within sandbox \"734b74e9a2fff41ce9219ca962abe2d7890ebbf64b1024712ba431fd3511eed1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 00:26:49.149009 env[1913]: time="2025-07-12T00:26:49.148819473Z" level=info msg="CreateContainer within sandbox \"734b74e9a2fff41ce9219ca962abe2d7890ebbf64b1024712ba431fd3511eed1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns 
container id \"1ed46191daeafeb5fdab6ed628945c0088646c5e2300ccf24998466fca50551c\"" Jul 12 00:26:49.152935 env[1913]: time="2025-07-12T00:26:49.152876545Z" level=info msg="StartContainer for \"1ed46191daeafeb5fdab6ed628945c0088646c5e2300ccf24998466fca50551c\"" Jul 12 00:26:49.232361 systemd[1]: run-containerd-runc-k8s.io-1ed46191daeafeb5fdab6ed628945c0088646c5e2300ccf24998466fca50551c-runc.B6Z4db.mount: Deactivated successfully. Jul 12 00:26:49.332073 env[1913]: time="2025-07-12T00:26:49.331966033Z" level=info msg="StartContainer for \"1ed46191daeafeb5fdab6ed628945c0088646c5e2300ccf24998466fca50551c\" returns successfully" Jul 12 00:26:49.432089 env[1913]: time="2025-07-12T00:26:49.432021526Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:49.438612 env[1913]: time="2025-07-12T00:26:49.438538960Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:49.442659 env[1913]: time="2025-07-12T00:26:49.442605236Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:49.446212 env[1913]: time="2025-07-12T00:26:49.446142688Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:49.448038 env[1913]: time="2025-07-12T00:26:49.447970364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 12 00:26:49.455371 env[1913]: 
time="2025-07-12T00:26:49.455305803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 12 00:26:49.456778 env[1913]: time="2025-07-12T00:26:49.456721178Z" level=info msg="CreateContainer within sandbox \"2bce4a2ff669ed9ed27e9e09dd730c7cb2d117142ce4f2b40c355e3b5c893604\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 00:26:49.483944 env[1913]: time="2025-07-12T00:26:49.483882968Z" level=info msg="CreateContainer within sandbox \"2bce4a2ff669ed9ed27e9e09dd730c7cb2d117142ce4f2b40c355e3b5c893604\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"87eb34a92a53bfc8791560c9b35a51287a8b2efa274d48eea8c70f1b2e40c7a7\"" Jul 12 00:26:49.486070 env[1913]: time="2025-07-12T00:26:49.484943904Z" level=info msg="StartContainer for \"87eb34a92a53bfc8791560c9b35a51287a8b2efa274d48eea8c70f1b2e40c7a7\"" Jul 12 00:26:49.672290 env[1913]: time="2025-07-12T00:26:49.672006558Z" level=info msg="StartContainer for \"87eb34a92a53bfc8791560c9b35a51287a8b2efa274d48eea8c70f1b2e40c7a7\" returns successfully" Jul 12 00:26:50.023838 kubelet[2983]: I0712 00:26:50.023736 2983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8494455ff7-bxk4f" podStartSLOduration=41.077777976 podStartE2EDuration="48.023692609s" podCreationTimestamp="2025-07-12 00:26:02 +0000 UTC" firstStartedPulling="2025-07-12 00:26:42.157150115 +0000 UTC m=+59.130351111" lastFinishedPulling="2025-07-12 00:26:49.103064748 +0000 UTC m=+66.076265744" observedRunningTime="2025-07-12 00:26:49.999210373 +0000 UTC m=+66.972411381" watchObservedRunningTime="2025-07-12 00:26:50.023692609 +0000 UTC m=+66.996893605" Jul 12 00:26:50.068000 audit[5708]: NETFILTER_CFG table=filter:124 family=2 entries=12 op=nft_register_rule pid=5708 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:26:50.068000 audit[5708]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 
a1=fffff6416780 a2=0 a3=1 items=0 ppid=3133 pid=5708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:50.068000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:26:50.076000 audit[5708]: NETFILTER_CFG table=nat:125 family=2 entries=22 op=nft_register_rule pid=5708 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:26:50.076000 audit[5708]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=fffff6416780 a2=0 a3=1 items=0 ppid=3133 pid=5708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:50.076000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:26:50.110000 audit[5710]: NETFILTER_CFG table=filter:126 family=2 entries=12 op=nft_register_rule pid=5710 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:26:50.110000 audit[5710]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffec39bd40 a2=0 a3=1 items=0 ppid=3133 pid=5710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:50.110000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:26:50.119000 audit[5710]: NETFILTER_CFG table=nat:127 family=2 entries=22 op=nft_register_rule pid=5710 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:26:50.119000 audit[5710]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=6540 a0=3 a1=ffffec39bd40 a2=0 a3=1 items=0 ppid=3133 pid=5710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:50.119000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:26:51.996067 kubelet[2983]: I0712 00:26:51.996008 2983 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:26:51.996876 kubelet[2983]: I0712 00:26:51.996672 2983 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:26:52.196982 kubelet[2983]: I0712 00:26:52.196436 2983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8494455ff7-gwch8" podStartSLOduration=44.336893736 podStartE2EDuration="50.196391074s" podCreationTimestamp="2025-07-12 00:26:02 +0000 UTC" firstStartedPulling="2025-07-12 00:26:43.591399872 +0000 UTC m=+60.564600868" lastFinishedPulling="2025-07-12 00:26:49.450897222 +0000 UTC m=+66.424098206" observedRunningTime="2025-07-12 00:26:50.024210129 +0000 UTC m=+66.997411173" watchObservedRunningTime="2025-07-12 00:26:52.196391074 +0000 UTC m=+69.169592070" Jul 12 00:26:52.313000 audit[5712]: NETFILTER_CFG table=filter:128 family=2 entries=11 op=nft_register_rule pid=5712 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:26:52.313000 audit[5712]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffc6d55350 a2=0 a3=1 items=0 ppid=3133 pid=5712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:52.313000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:26:52.321000 audit[5712]: NETFILTER_CFG table=nat:129 family=2 entries=29 op=nft_register_chain pid=5712 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:26:52.321000 audit[5712]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10116 a0=3 a1=ffffc6d55350 a2=0 a3=1 items=0 ppid=3133 pid=5712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:52.321000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:26:53.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.29.120:22-147.75.109.163:41946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:53.150599 systemd[1]: Started sshd@9-172.31.29.120:22-147.75.109.163:41946.service. Jul 12 00:26:53.153066 kernel: kauditd_printk_skb: 19 callbacks suppressed Jul 12 00:26:53.153179 kernel: audit: type=1130 audit(1752280013.149:450): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.29.120:22-147.75.109.163:41946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:26:53.169000 audit[5715]: NETFILTER_CFG table=filter:130 family=2 entries=10 op=nft_register_rule pid=5715 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:26:53.169000 audit[5715]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffdedb0a10 a2=0 a3=1 items=0 ppid=3133 pid=5715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:53.189331 kernel: audit: type=1325 audit(1752280013.169:451): table=filter:130 family=2 entries=10 op=nft_register_rule pid=5715 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:26:53.189477 kernel: audit: type=1300 audit(1752280013.169:451): arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffdedb0a10 a2=0 a3=1 items=0 ppid=3133 pid=5715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:53.169000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:26:53.199105 kernel: audit: type=1327 audit(1752280013.169:451): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:26:53.191000 audit[5715]: NETFILTER_CFG table=nat:131 family=2 entries=36 op=nft_register_chain pid=5715 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:26:53.205448 kernel: audit: type=1325 audit(1752280013.191:452): table=nat:131 family=2 entries=36 op=nft_register_chain pid=5715 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:26:53.191000 audit[5715]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=12004 a0=3 a1=ffffdedb0a10 a2=0 a3=1 items=0 ppid=3133 pid=5715 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:53.222401 kernel: audit: type=1300 audit(1752280013.191:452): arch=c00000b7 syscall=211 success=yes exit=12004 a0=3 a1=ffffdedb0a10 a2=0 a3=1 items=0 ppid=3133 pid=5715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:53.191000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:26:53.228555 kernel: audit: type=1327 audit(1752280013.191:452): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:26:53.247306 env[1913]: time="2025-07-12T00:26:53.247210004Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:53.254908 env[1913]: time="2025-07-12T00:26:53.254851354Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:53.258975 env[1913]: time="2025-07-12T00:26:53.258917348Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:53.262793 env[1913]: time="2025-07-12T00:26:53.262733967Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:53.266056 env[1913]: time="2025-07-12T00:26:53.264744032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 12 00:26:53.277174 env[1913]: time="2025-07-12T00:26:53.277119029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 12 00:26:53.292129 env[1913]: time="2025-07-12T00:26:53.291875149Z" level=info msg="CreateContainer within sandbox \"d3ab911008595b5f310545caa2e0ebbb4f7225447171e7cd4e62d1efee834b32\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 12 00:26:53.342885 env[1913]: time="2025-07-12T00:26:53.342803591Z" level=info msg="CreateContainer within sandbox \"d3ab911008595b5f310545caa2e0ebbb4f7225447171e7cd4e62d1efee834b32\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e9bf6cbf89b0e5a8b36a6e2da654e504313ac059980a57e9e15a289627fb7b83\"" Jul 12 00:26:53.346577 env[1913]: time="2025-07-12T00:26:53.346518437Z" level=info msg="StartContainer for \"e9bf6cbf89b0e5a8b36a6e2da654e504313ac059980a57e9e15a289627fb7b83\"" Jul 12 00:26:53.387000 audit[5716]: USER_ACCT pid=5716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:53.404392 sshd[5716]: Accepted publickey for core from 147.75.109.163 port 41946 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:26:53.407149 sshd[5716]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:26:53.404000 audit[5716]: 
CRED_ACQ pid=5716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:53.417995 kernel: audit: type=1101 audit(1752280013.387:453): pid=5716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:53.418141 kernel: audit: type=1103 audit(1752280013.404:454): pid=5716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:53.427446 systemd[1]: Started session-10.scope. Jul 12 00:26:53.428486 systemd-logind[1905]: New session 10 of user core. 
Jul 12 00:26:53.438699 kernel: audit: type=1006 audit(1752280013.404:455): pid=5716 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jul 12 00:26:53.404000 audit[5716]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd2008e00 a2=3 a3=1 items=0 ppid=1 pid=5716 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:53.404000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:26:53.453000 audit[5716]: USER_START pid=5716 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:53.457000 audit[5728]: CRED_ACQ pid=5728 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:53.701440 env[1913]: time="2025-07-12T00:26:53.701299900Z" level=info msg="StartContainer for \"e9bf6cbf89b0e5a8b36a6e2da654e504313ac059980a57e9e15a289627fb7b83\" returns successfully" Jul 12 00:26:53.854458 sshd[5716]: pam_unix(sshd:session): session closed for user core Jul 12 00:26:53.854000 audit[5716]: USER_END pid=5716 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:53.855000 audit[5716]: CRED_DISP pid=5716 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:53.859928 systemd-logind[1905]: Session 10 logged out. Waiting for processes to exit. Jul 12 00:26:53.862852 systemd[1]: sshd@9-172.31.29.120:22-147.75.109.163:41946.service: Deactivated successfully. Jul 12 00:26:53.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.29.120:22-147.75.109.163:41946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:53.864556 systemd[1]: session-10.scope: Deactivated successfully. Jul 12 00:26:53.867732 systemd-logind[1905]: Removed session 10. Jul 12 00:26:53.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.29.120:22-147.75.109.163:41958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:53.881659 systemd[1]: Started sshd@10-172.31.29.120:22-147.75.109.163:41958.service. 
Jul 12 00:26:54.066647 sshd[5777]: Accepted publickey for core from 147.75.109.163 port 41958 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:26:54.065000 audit[5777]: USER_ACCT pid=5777 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:54.067000 audit[5777]: CRED_ACQ pid=5777 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:54.068000 audit[5777]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffed717e30 a2=3 a3=1 items=0 ppid=1 pid=5777 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:54.068000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:26:54.070011 sshd[5777]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:26:54.086587 systemd[1]: Started session-11.scope. Jul 12 00:26:54.088893 systemd-logind[1905]: New session 11 of user core. 
Jul 12 00:26:54.112000 audit[5777]: USER_START pid=5777 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:54.114000 audit[5784]: CRED_ACQ pid=5784 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:54.237149 kubelet[2983]: I0712 00:26:54.236584 2983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-b9c4d9bf9-swqxk" podStartSLOduration=30.786160067 podStartE2EDuration="40.2365601s" podCreationTimestamp="2025-07-12 00:26:14 +0000 UTC" firstStartedPulling="2025-07-12 00:26:43.818088017 +0000 UTC m=+60.791289013" lastFinishedPulling="2025-07-12 00:26:53.268488038 +0000 UTC m=+70.241689046" observedRunningTime="2025-07-12 00:26:54.056487049 +0000 UTC m=+71.029688081" watchObservedRunningTime="2025-07-12 00:26:54.2365601 +0000 UTC m=+71.209761096" Jul 12 00:26:54.570376 sshd[5777]: pam_unix(sshd:session): session closed for user core Jul 12 00:26:54.571000 audit[5777]: USER_END pid=5777 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:54.571000 audit[5777]: CRED_DISP pid=5777 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:54.576000 audit[1]: SERVICE_STOP pid=1 
uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.29.120:22-147.75.109.163:41958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:54.577077 systemd[1]: sshd@10-172.31.29.120:22-147.75.109.163:41958.service: Deactivated successfully. Jul 12 00:26:54.579036 systemd[1]: session-11.scope: Deactivated successfully. Jul 12 00:26:54.587028 systemd-logind[1905]: Session 11 logged out. Waiting for processes to exit. Jul 12 00:26:54.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.29.120:22-147.75.109.163:41968 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:26:54.595164 systemd[1]: Started sshd@11-172.31.29.120:22-147.75.109.163:41968.service. Jul 12 00:26:54.609932 systemd-logind[1905]: Removed session 11. Jul 12 00:26:54.825000 audit[5810]: USER_ACCT pid=5810 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:54.827673 sshd[5810]: Accepted publickey for core from 147.75.109.163 port 41968 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:26:54.829000 audit[5810]: CRED_ACQ pid=5810 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:54.830000 audit[5810]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcd1e0490 a2=3 a3=1 items=0 ppid=1 pid=5810 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 
12 00:26:54.830000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:26:54.833005 sshd[5810]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:26:54.847978 systemd[1]: Started session-12.scope. Jul 12 00:26:54.849328 systemd-logind[1905]: New session 12 of user core. Jul 12 00:26:54.864000 audit[5810]: USER_START pid=5810 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:54.866000 audit[5813]: CRED_ACQ pid=5813 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:55.199165 sshd[5810]: pam_unix(sshd:session): session closed for user core Jul 12 00:26:55.201000 audit[5810]: USER_END pid=5810 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:55.204000 audit[5810]: CRED_DISP pid=5810 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:26:55.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.29.120:22-147.75.109.163:41968 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:26:55.203327 systemd[1]: run-containerd-runc-k8s.io-e9bf6cbf89b0e5a8b36a6e2da654e504313ac059980a57e9e15a289627fb7b83-runc.RiAGDE.mount: Deactivated successfully. Jul 12 00:26:55.211278 systemd-logind[1905]: Session 12 logged out. Waiting for processes to exit. Jul 12 00:26:55.215421 systemd[1]: sshd@11-172.31.29.120:22-147.75.109.163:41968.service: Deactivated successfully. Jul 12 00:26:55.217134 systemd[1]: session-12.scope: Deactivated successfully. Jul 12 00:26:55.225382 systemd-logind[1905]: Removed session 12. Jul 12 00:26:56.149468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount43360111.mount: Deactivated successfully. Jul 12 00:26:57.453199 env[1913]: time="2025-07-12T00:26:57.453139774Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:57.460951 env[1913]: time="2025-07-12T00:26:57.460895848Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:57.467091 env[1913]: time="2025-07-12T00:26:57.467032668Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:57.472938 env[1913]: time="2025-07-12T00:26:57.472885553Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:57.476139 env[1913]: time="2025-07-12T00:26:57.474734106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 
12 00:26:57.480322 env[1913]: time="2025-07-12T00:26:57.480265603Z" level=info msg="CreateContainer within sandbox \"2e8195d46cf0920bcb43e69b01a439a8aeb2d7d2591cec18fa06f734311cd78a\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 12 00:26:57.480957 env[1913]: time="2025-07-12T00:26:57.480913023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 12 00:26:57.521129 env[1913]: time="2025-07-12T00:26:57.521044167Z" level=info msg="CreateContainer within sandbox \"2e8195d46cf0920bcb43e69b01a439a8aeb2d7d2591cec18fa06f734311cd78a\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2e09855f4a8f003eb5fdb3d582c84cdbc012a20bcaf513ba338317d484a84415\"" Jul 12 00:26:57.523494 env[1913]: time="2025-07-12T00:26:57.523442183Z" level=info msg="StartContainer for \"2e09855f4a8f003eb5fdb3d582c84cdbc012a20bcaf513ba338317d484a84415\"" Jul 12 00:26:57.603455 systemd[1]: run-containerd-runc-k8s.io-2e09855f4a8f003eb5fdb3d582c84cdbc012a20bcaf513ba338317d484a84415-runc.ytcW4O.mount: Deactivated successfully. 
Jul 12 00:26:57.755557 env[1913]: time="2025-07-12T00:26:57.755403495Z" level=info msg="StartContainer for \"2e09855f4a8f003eb5fdb3d582c84cdbc012a20bcaf513ba338317d484a84415\" returns successfully" Jul 12 00:26:58.118000 audit[5890]: NETFILTER_CFG table=filter:132 family=2 entries=10 op=nft_register_rule pid=5890 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:26:58.118000 audit[5890]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffedc58530 a2=0 a3=1 items=0 ppid=3133 pid=5890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:58.118000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:26:58.126000 audit[5890]: NETFILTER_CFG table=nat:133 family=2 entries=24 op=nft_register_rule pid=5890 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:26:58.126000 audit[5890]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7308 a0=3 a1=ffffedc58530 a2=0 a3=1 items=0 ppid=3133 pid=5890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:26:58.126000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:26:59.115133 env[1913]: time="2025-07-12T00:26:59.115073915Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:59.119577 env[1913]: time="2025-07-12T00:26:59.119517655Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:59.123319 env[1913]: time="2025-07-12T00:26:59.123251094Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:59.132602 env[1913]: time="2025-07-12T00:26:59.132545416Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:26:59.133416 env[1913]: time="2025-07-12T00:26:59.133367894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 12 00:26:59.149272 env[1913]: time="2025-07-12T00:26:59.145527132Z" level=info msg="CreateContainer within sandbox \"89020aaf13edd8f4e41e6352c6e10f4894246d1f5f0a17682ab1687e04ee8af7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 12 00:26:59.181180 env[1913]: time="2025-07-12T00:26:59.181089881Z" level=info msg="CreateContainer within sandbox \"89020aaf13edd8f4e41e6352c6e10f4894246d1f5f0a17682ab1687e04ee8af7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"55343100024e11420562653a634d1880f7a1a6dfe1a7d00ee8db5ed389954ad5\"" Jul 12 00:26:59.183592 env[1913]: time="2025-07-12T00:26:59.183535583Z" level=info msg="StartContainer for \"55343100024e11420562653a634d1880f7a1a6dfe1a7d00ee8db5ed389954ad5\"" Jul 12 00:26:59.379489 env[1913]: time="2025-07-12T00:26:59.379334063Z" level=info msg="StartContainer for \"55343100024e11420562653a634d1880f7a1a6dfe1a7d00ee8db5ed389954ad5\" returns successfully" 
Jul 12 00:26:59.605930 kubelet[2983]: I0712 00:26:59.605890 2983 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 12 00:26:59.606694 kubelet[2983]: I0712 00:26:59.606670 2983 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 12 00:27:00.079465 kubelet[2983]: I0712 00:27:00.079357 2983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-p759q" podStartSLOduration=33.455519963 podStartE2EDuration="47.07933216s" podCreationTimestamp="2025-07-12 00:26:13 +0000 UTC" firstStartedPulling="2025-07-12 00:26:43.854110087 +0000 UTC m=+60.827311083" lastFinishedPulling="2025-07-12 00:26:57.477922272 +0000 UTC m=+74.451123280" observedRunningTime="2025-07-12 00:26:58.074039409 +0000 UTC m=+75.047240417" watchObservedRunningTime="2025-07-12 00:27:00.07933216 +0000 UTC m=+77.052533156" Jul 12 00:27:00.224176 systemd[1]: Started sshd@12-172.31.29.120:22-147.75.109.163:43360.service. Jul 12 00:27:00.227743 kernel: kauditd_printk_skb: 35 callbacks suppressed Jul 12 00:27:00.227841 kernel: audit: type=1130 audit(1752280020.223:481): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.29.120:22-147.75.109.163:43360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:27:00.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.29.120:22-147.75.109.163:43360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:27:00.425000 audit[5966]: USER_ACCT pid=5966 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:00.426665 sshd[5966]: Accepted publickey for core from 147.75.109.163 port 43360 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:27:00.437305 kernel: audit: type=1101 audit(1752280020.425:482): pid=5966 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:00.437000 audit[5966]: CRED_ACQ pid=5966 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:00.440582 sshd[5966]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:27:00.455462 kernel: audit: type=1103 audit(1752280020.437:483): pid=5966 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:00.455571 kernel: audit: type=1006 audit(1752280020.437:484): pid=5966 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jul 12 00:27:00.437000 audit[5966]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffebc1940 a2=3 a3=1 items=0 ppid=1 pid=5966 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:00.466279 kernel: audit: type=1300 audit(1752280020.437:484): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffebc1940 a2=3 a3=1 items=0 ppid=1 pid=5966 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:00.437000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:27:00.470590 kernel: audit: type=1327 audit(1752280020.437:484): proctitle=737368643A20636F7265205B707269765D Jul 12 00:27:00.477789 systemd-logind[1905]: New session 13 of user core. Jul 12 00:27:00.478943 systemd[1]: Started session-13.scope. Jul 12 00:27:00.494000 audit[5966]: USER_START pid=5966 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:00.497000 audit[5969]: CRED_ACQ pid=5969 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:00.510336 kernel: audit: type=1105 audit(1752280020.494:485): pid=5966 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:00.520272 kernel: audit: type=1103 audit(1752280020.497:486): pid=5969 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh 
res=success' Jul 12 00:27:00.773414 sshd[5966]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:00.774000 audit[5966]: USER_END pid=5966 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:00.780268 systemd[1]: sshd@12-172.31.29.120:22-147.75.109.163:43360.service: Deactivated successfully. Jul 12 00:27:00.782405 systemd[1]: session-13.scope: Deactivated successfully. Jul 12 00:27:00.775000 audit[5966]: CRED_DISP pid=5966 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:00.797260 kernel: audit: type=1106 audit(1752280020.774:487): pid=5966 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:00.797454 kernel: audit: type=1104 audit(1752280020.775:488): pid=5966 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:00.797523 systemd-logind[1905]: Session 13 logged out. Waiting for processes to exit. Jul 12 00:27:00.799387 systemd-logind[1905]: Removed session 13. Jul 12 00:27:00.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.29.120:22-147.75.109.163:43360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 12 00:27:05.802950 systemd[1]: Started sshd@13-172.31.29.120:22-147.75.109.163:43372.service. Jul 12 00:27:05.815441 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 12 00:27:05.815572 kernel: audit: type=1130 audit(1752280025.803:490): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.29.120:22-147.75.109.163:43372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:27:05.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.29.120:22-147.75.109.163:43372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:27:05.985000 audit[5987]: USER_ACCT pid=5987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:05.989499 sshd[5987]: Accepted publickey for core from 147.75.109.163 port 43372 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:27:05.993978 sshd[5987]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:27:05.991000 audit[5987]: CRED_ACQ pid=5987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:05.998367 kernel: audit: type=1101 audit(1752280025.985:491): pid=5987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:05.998475 kernel: audit: 
type=1103 audit(1752280025.991:492): pid=5987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:06.014289 kernel: audit: type=1006 audit(1752280025.992:493): pid=5987 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jul 12 00:27:05.992000 audit[5987]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc3149b10 a2=3 a3=1 items=0 ppid=1 pid=5987 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:06.024790 kernel: audit: type=1300 audit(1752280025.992:493): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc3149b10 a2=3 a3=1 items=0 ppid=1 pid=5987 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:05.992000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:27:06.028559 kernel: audit: type=1327 audit(1752280025.992:493): proctitle=737368643A20636F7265205B707269765D Jul 12 00:27:06.033531 systemd-logind[1905]: New session 14 of user core. Jul 12 00:27:06.035478 systemd[1]: Started session-14.scope. 
Jul 12 00:27:06.044000 audit[5987]: USER_START pid=5987 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:06.059452 kernel: audit: type=1105 audit(1752280026.044:494): pid=5987 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:06.047000 audit[5990]: CRED_ACQ pid=5990 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:06.075352 kernel: audit: type=1103 audit(1752280026.047:495): pid=5990 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:06.412456 sshd[5987]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:06.413000 audit[5987]: USER_END pid=5987 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:06.417574 systemd[1]: sshd@13-172.31.29.120:22-147.75.109.163:43372.service: Deactivated successfully. Jul 12 00:27:06.419004 systemd[1]: session-14.scope: Deactivated successfully. Jul 12 00:27:06.428643 systemd-logind[1905]: Session 14 logged out. 
Waiting for processes to exit. Jul 12 00:27:06.413000 audit[5987]: CRED_DISP pid=5987 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:06.437968 kernel: audit: type=1106 audit(1752280026.413:496): pid=5987 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:06.438142 kernel: audit: type=1104 audit(1752280026.413:497): pid=5987 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:06.439809 systemd-logind[1905]: Removed session 14. Jul 12 00:27:06.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.29.120:22-147.75.109.163:43372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:27:11.437645 systemd[1]: Started sshd@14-172.31.29.120:22-147.75.109.163:53068.service. Jul 12 00:27:11.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.29.120:22-147.75.109.163:53068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:27:11.441270 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 12 00:27:11.441410 kernel: audit: type=1130 audit(1752280031.437:499): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.29.120:22-147.75.109.163:53068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:27:11.654950 sshd[6000]: Accepted publickey for core from 147.75.109.163 port 53068 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:27:11.653000 audit[6000]: USER_ACCT pid=6000 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:11.669504 sshd[6000]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:27:11.690267 kernel: audit: type=1101 audit(1752280031.653:500): pid=6000 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:11.690432 kernel: audit: type=1103 audit(1752280031.666:501): pid=6000 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:11.666000 audit[6000]: CRED_ACQ pid=6000 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:11.698880 systemd[1]: Started session-15.scope. 
Jul 12 00:27:11.705626 kernel: audit: type=1006 audit(1752280031.667:502): pid=6000 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jul 12 00:27:11.706376 systemd-logind[1905]: New session 15 of user core. Jul 12 00:27:11.667000 audit[6000]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcef063b0 a2=3 a3=1 items=0 ppid=1 pid=6000 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:11.718456 kernel: audit: type=1300 audit(1752280031.667:502): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcef063b0 a2=3 a3=1 items=0 ppid=1 pid=6000 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:11.667000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:27:11.724048 kernel: audit: type=1327 audit(1752280031.667:502): proctitle=737368643A20636F7265205B707269765D Jul 12 00:27:11.733000 audit[6000]: USER_START pid=6000 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:11.747000 audit[6003]: CRED_ACQ pid=6003 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:11.762358 kernel: audit: type=1105 audit(1752280031.733:503): pid=6000 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:11.762512 kernel: audit: type=1103 audit(1752280031.747:504): pid=6003 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:12.101544 sshd[6000]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:12.102000 audit[6000]: USER_END pid=6000 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:12.119202 systemd[1]: sshd@14-172.31.29.120:22-147.75.109.163:53068.service: Deactivated successfully. Jul 12 00:27:12.120833 systemd[1]: session-15.scope: Deactivated successfully. Jul 12 00:27:12.121628 systemd-logind[1905]: Session 15 logged out. Waiting for processes to exit. 
Jul 12 00:27:12.102000 audit[6000]: CRED_DISP pid=6000 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:12.131325 kernel: audit: type=1106 audit(1752280032.102:505): pid=6000 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:12.131516 kernel: audit: type=1104 audit(1752280032.102:506): pid=6000 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:12.132656 systemd-logind[1905]: Removed session 15. Jul 12 00:27:12.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.29.120:22-147.75.109.163:53068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:27:14.492111 systemd[1]: run-containerd-runc-k8s.io-42b9d6dc0035ca32ad37cd639945c7b3f5acda86cdae0a63724ad380d59dcb45-runc.cGlty8.mount: Deactivated successfully. Jul 12 00:27:17.127571 systemd[1]: Started sshd@15-172.31.29.120:22-147.75.109.163:50324.service. Jul 12 00:27:17.138395 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 12 00:27:17.138509 kernel: audit: type=1130 audit(1752280037.127:508): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.29.120:22-147.75.109.163:50324 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:27:17.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.29.120:22-147.75.109.163:50324 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:27:17.337464 sshd[6032]: Accepted publickey for core from 147.75.109.163 port 50324 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:27:17.336000 audit[6032]: USER_ACCT pid=6032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:17.349085 sshd[6032]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:27:17.347000 audit[6032]: CRED_ACQ pid=6032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:17.359681 systemd-logind[1905]: New session 16 of user core. Jul 12 00:27:17.361550 systemd[1]: Started session-16.scope. 
Jul 12 00:27:17.367850 kernel: audit: type=1101 audit(1752280037.336:509): pid=6032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:17.368009 kernel: audit: type=1103 audit(1752280037.347:510): pid=6032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:17.388179 kernel: audit: type=1006 audit(1752280037.347:511): pid=6032 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jul 12 00:27:17.347000 audit[6032]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcd269b60 a2=3 a3=1 items=0 ppid=1 pid=6032 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:17.413409 kernel: audit: type=1300 audit(1752280037.347:511): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcd269b60 a2=3 a3=1 items=0 ppid=1 pid=6032 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:17.432271 kernel: audit: type=1327 audit(1752280037.347:511): proctitle=737368643A20636F7265205B707269765D Jul 12 00:27:17.432418 kernel: audit: type=1105 audit(1752280037.390:512): pid=6032 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 
00:27:17.347000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:27:17.390000 audit[6032]: USER_START pid=6032 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:17.393000 audit[6035]: CRED_ACQ pid=6035 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:17.455262 kernel: audit: type=1103 audit(1752280037.393:513): pid=6035 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:17.799389 sshd[6032]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:17.800000 audit[6032]: USER_END pid=6032 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:17.805737 systemd-logind[1905]: Session 16 logged out. Waiting for processes to exit. Jul 12 00:27:17.816878 systemd[1]: sshd@15-172.31.29.120:22-147.75.109.163:50324.service: Deactivated successfully. 
Jul 12 00:27:17.801000 audit[6032]: CRED_DISP pid=6032 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:17.834398 kernel: audit: type=1106 audit(1752280037.800:514): pid=6032 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:17.834610 kernel: audit: type=1104 audit(1752280037.801:515): pid=6032 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:17.835705 systemd[1]: Started sshd@16-172.31.29.120:22-147.75.109.163:50334.service. Jul 12 00:27:17.838713 systemd[1]: session-16.scope: Deactivated successfully. Jul 12 00:27:17.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.29.120:22-147.75.109.163:50324 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:27:17.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.29.120:22-147.75.109.163:50334 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:27:17.851184 systemd-logind[1905]: Removed session 16. 
Jul 12 00:27:18.040033 sshd[6045]: Accepted publickey for core from 147.75.109.163 port 50334 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:27:18.038000 audit[6045]: USER_ACCT pid=6045 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:18.040000 audit[6045]: CRED_ACQ pid=6045 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:18.040000 audit[6045]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff8c31550 a2=3 a3=1 items=0 ppid=1 pid=6045 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:18.040000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:27:18.042982 sshd[6045]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:27:18.052575 systemd-logind[1905]: New session 17 of user core. Jul 12 00:27:18.057799 systemd[1]: Started session-17.scope. 
Jul 12 00:27:18.073000 audit[6045]: USER_START pid=6045 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:18.076000 audit[6048]: CRED_ACQ pid=6048 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:18.855516 systemd[1]: run-containerd-runc-k8s.io-2e09855f4a8f003eb5fdb3d582c84cdbc012a20bcaf513ba338317d484a84415-runc.DZ1S1I.mount: Deactivated successfully. Jul 12 00:27:18.875325 sshd[6045]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:18.880000 audit[6045]: USER_END pid=6045 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:18.880000 audit[6045]: CRED_DISP pid=6045 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:18.885582 systemd[1]: sshd@16-172.31.29.120:22-147.75.109.163:50334.service: Deactivated successfully. Jul 12 00:27:18.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.29.120:22-147.75.109.163:50334 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:27:18.891881 systemd[1]: session-17.scope: Deactivated successfully. 
Jul 12 00:27:18.892731 systemd-logind[1905]: Session 17 logged out. Waiting for processes to exit. Jul 12 00:27:18.894298 systemd-logind[1905]: Removed session 17. Jul 12 00:27:18.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.29.120:22-147.75.109.163:50338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:27:18.902386 systemd[1]: Started sshd@17-172.31.29.120:22-147.75.109.163:50338.service. Jul 12 00:27:19.148391 sshd[6068]: Accepted publickey for core from 147.75.109.163 port 50338 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:27:19.147000 audit[6068]: USER_ACCT pid=6068 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:19.151025 sshd[6068]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:27:19.148000 audit[6068]: CRED_ACQ pid=6068 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:19.149000 audit[6068]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffddaec700 a2=3 a3=1 items=0 ppid=1 pid=6068 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:19.149000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:27:19.160320 systemd-logind[1905]: New session 18 of user core. Jul 12 00:27:19.163174 systemd[1]: Started session-18.scope. 
Jul 12 00:27:19.173000 audit[6068]: USER_START pid=6068 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:19.180000 audit[6082]: CRED_ACQ pid=6082 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:23.818493 sshd[6068]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:23.836797 kernel: kauditd_printk_skb: 20 callbacks suppressed Jul 12 00:27:23.836978 kernel: audit: type=1106 audit(1752280043.820:532): pid=6068 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:23.820000 audit[6068]: USER_END pid=6068 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:23.839353 systemd[1]: sshd@17-172.31.29.120:22-147.75.109.163:50338.service: Deactivated successfully. Jul 12 00:27:23.842368 systemd-logind[1905]: Session 18 logged out. Waiting for processes to exit. Jul 12 00:27:23.849534 systemd[1]: Started sshd@18-172.31.29.120:22-147.75.109.163:50342.service. 
Jul 12 00:27:23.820000 audit[6068]: CRED_DISP pid=6068 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:23.863320 kernel: audit: type=1104 audit(1752280043.820:533): pid=6068 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:23.854167 systemd[1]: session-18.scope: Deactivated successfully. Jul 12 00:27:23.877363 systemd-logind[1905]: Removed session 18. Jul 12 00:27:23.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.29.120:22-147.75.109.163:50338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:27:23.891593 amazon-ssm-agent[1887]: 2025-07-12 00:27:23 INFO [HealthCheck] HealthCheck reporting agent health. Jul 12 00:27:23.906717 kernel: audit: type=1131 audit(1752280043.838:534): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.29.120:22-147.75.109.163:50338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:27:23.907159 kernel: audit: type=1130 audit(1752280043.850:535): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.29.120:22-147.75.109.163:50342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:27:23.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.29.120:22-147.75.109.163:50342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:27:23.932000 audit[6106]: NETFILTER_CFG table=filter:134 family=2 entries=22 op=nft_register_rule pid=6106 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:27:23.932000 audit[6106]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=12688 a0=3 a1=ffffd7958650 a2=0 a3=1 items=0 ppid=3133 pid=6106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:23.953058 kernel: audit: type=1325 audit(1752280043.932:536): table=filter:134 family=2 entries=22 op=nft_register_rule pid=6106 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:27:23.953242 kernel: audit: type=1300 audit(1752280043.932:536): arch=c00000b7 syscall=211 success=yes exit=12688 a0=3 a1=ffffd7958650 a2=0 a3=1 items=0 ppid=3133 pid=6106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:23.953333 kernel: audit: type=1327 audit(1752280043.932:536): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:27:23.932000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:27:23.961000 audit[6106]: NETFILTER_CFG table=nat:135 family=2 entries=24 op=nft_register_rule pid=6106 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:27:23.971087 kernel: audit: type=1325 audit(1752280043.961:537): table=nat:135 family=2 entries=24 op=nft_register_rule pid=6106 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:27:23.961000 audit[6106]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7308 a0=3 a1=ffffd7958650 a2=0 a3=1 items=0 ppid=3133 pid=6106 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:23.985992 kernel: audit: type=1300 audit(1752280043.961:537): arch=c00000b7 syscall=211 success=yes exit=7308 a0=3 a1=ffffd7958650 a2=0 a3=1 items=0 ppid=3133 pid=6106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:23.961000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:27:23.992626 kernel: audit: type=1327 audit(1752280043.961:537): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:27:24.023000 audit[6109]: NETFILTER_CFG table=filter:136 family=2 entries=34 op=nft_register_rule pid=6109 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:27:24.023000 audit[6109]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=12688 a0=3 a1=ffffc6d347a0 a2=0 a3=1 items=0 ppid=3133 pid=6109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:24.023000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:27:24.038000 audit[6109]: NETFILTER_CFG table=nat:137 family=2 entries=24 op=nft_register_rule pid=6109 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:27:24.038000 audit[6109]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7308 a0=3 a1=ffffc6d347a0 a2=0 a3=1 items=0 ppid=3133 pid=6109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:24.038000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:27:24.089000 audit[6104]: USER_ACCT pid=6104 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:24.092350 sshd[6104]: Accepted publickey for core from 147.75.109.163 port 50342 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:27:24.094684 sshd[6104]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:27:24.092000 audit[6104]: CRED_ACQ pid=6104 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:24.092000 audit[6104]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffc2233a0 a2=3 a3=1 items=0 ppid=1 pid=6104 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:24.092000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:27:24.105702 systemd[1]: Started session-19.scope. Jul 12 00:27:24.106407 systemd-logind[1905]: New session 19 of user core. 
Jul 12 00:27:24.118000 audit[6104]: USER_START pid=6104 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:24.121000 audit[6111]: CRED_ACQ pid=6111 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:25.046098 sshd[6104]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:25.046000 audit[6104]: USER_END pid=6104 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:25.047000 audit[6104]: CRED_DISP pid=6104 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:25.052315 systemd-logind[1905]: Session 19 logged out. Waiting for processes to exit. Jul 12 00:27:25.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.29.120:22-147.75.109.163:50342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:27:25.054723 systemd[1]: sshd@18-172.31.29.120:22-147.75.109.163:50342.service: Deactivated successfully. Jul 12 00:27:25.056368 systemd[1]: session-19.scope: Deactivated successfully. Jul 12 00:27:25.057403 systemd-logind[1905]: Removed session 19. 
Jul 12 00:27:25.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.29.120:22-147.75.109.163:50346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:27:25.070806 systemd[1]: Started sshd@19-172.31.29.120:22-147.75.109.163:50346.service. Jul 12 00:27:25.219475 systemd[1]: run-containerd-runc-k8s.io-e9bf6cbf89b0e5a8b36a6e2da654e504313ac059980a57e9e15a289627fb7b83-runc.r8OCag.mount: Deactivated successfully. Jul 12 00:27:25.276000 audit[6120]: USER_ACCT pid=6120 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:25.278187 sshd[6120]: Accepted publickey for core from 147.75.109.163 port 50346 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:27:25.278000 audit[6120]: CRED_ACQ pid=6120 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:25.279000 audit[6120]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe3743800 a2=3 a3=1 items=0 ppid=1 pid=6120 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:25.279000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:27:25.280949 sshd[6120]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:27:25.294511 systemd[1]: Started session-20.scope. Jul 12 00:27:25.295804 systemd-logind[1905]: New session 20 of user core. 
Jul 12 00:27:25.326000 audit[6120]: USER_START pid=6120 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:25.329000 audit[6147]: CRED_ACQ pid=6147 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:25.674888 sshd[6120]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:25.677000 audit[6120]: USER_END pid=6120 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:25.677000 audit[6120]: CRED_DISP pid=6120 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:25.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.29.120:22-147.75.109.163:50346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:27:25.681856 systemd[1]: sshd@19-172.31.29.120:22-147.75.109.163:50346.service: Deactivated successfully. Jul 12 00:27:25.684712 systemd[1]: session-20.scope: Deactivated successfully. Jul 12 00:27:25.685976 systemd-logind[1905]: Session 20 logged out. Waiting for processes to exit. Jul 12 00:27:25.689821 systemd-logind[1905]: Removed session 20. 
Jul 12 00:27:25.765997 kubelet[2983]: I0712 00:27:25.765888 2983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-g7wxf" podStartSLOduration=52.980055348 podStartE2EDuration="1m11.765865687s" podCreationTimestamp="2025-07-12 00:26:14 +0000 UTC" firstStartedPulling="2025-07-12 00:26:40.357088855 +0000 UTC m=+57.330289851" lastFinishedPulling="2025-07-12 00:26:59.142899194 +0000 UTC m=+76.116100190" observedRunningTime="2025-07-12 00:27:00.079939079 +0000 UTC m=+77.053140111" watchObservedRunningTime="2025-07-12 00:27:25.765865687 +0000 UTC m=+102.739066695" Jul 12 00:27:25.823000 audit[6172]: NETFILTER_CFG table=filter:138 family=2 entries=33 op=nft_register_rule pid=6172 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:27:25.823000 audit[6172]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11944 a0=3 a1=ffffc64edb00 a2=0 a3=1 items=0 ppid=3133 pid=6172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:25.823000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:27:25.832000 audit[6172]: NETFILTER_CFG table=nat:139 family=2 entries=31 op=nft_register_chain pid=6172 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:27:25.832000 audit[6172]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10884 a0=3 a1=ffffc64edb00 a2=0 a3=1 items=0 ppid=3133 pid=6172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:25.832000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 
00:27:30.704709 systemd[1]: Started sshd@20-172.31.29.120:22-147.75.109.163:53256.service. Jul 12 00:27:30.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.29.120:22-147.75.109.163:53256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:27:30.714199 kernel: kauditd_printk_skb: 33 callbacks suppressed Jul 12 00:27:30.714342 kernel: audit: type=1130 audit(1752280050.703:559): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.29.120:22-147.75.109.163:53256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:27:30.885000 audit[6173]: USER_ACCT pid=6173 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:30.886831 sshd[6173]: Accepted publickey for core from 147.75.109.163 port 53256 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:27:30.899306 kernel: audit: type=1101 audit(1752280050.885:560): pid=6173 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:30.897000 audit[6173]: CRED_ACQ pid=6173 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:30.900292 sshd[6173]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:27:30.916538 kernel: audit: type=1103 audit(1752280050.897:561): 
pid=6173 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:30.916659 kernel: audit: type=1006 audit(1752280050.898:562): pid=6173 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jul 12 00:27:30.910838 systemd[1]: Started session-21.scope. Jul 12 00:27:30.916750 systemd-logind[1905]: New session 21 of user core. Jul 12 00:27:30.898000 audit[6173]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff7df52e0 a2=3 a3=1 items=0 ppid=1 pid=6173 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:30.928473 kernel: audit: type=1300 audit(1752280050.898:562): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff7df52e0 a2=3 a3=1 items=0 ppid=1 pid=6173 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:30.898000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:27:30.940801 kernel: audit: type=1327 audit(1752280050.898:562): proctitle=737368643A20636F7265205B707269765D Jul 12 00:27:30.930000 audit[6173]: USER_START pid=6173 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:30.952582 kernel: audit: type=1105 audit(1752280050.930:563): pid=6173 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:30.934000 audit[6176]: CRED_ACQ pid=6176 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:30.963805 kernel: audit: type=1103 audit(1752280050.934:564): pid=6176 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:31.275732 sshd[6173]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:31.276000 audit[6173]: USER_END pid=6173 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:31.291259 systemd[1]: sshd@20-172.31.29.120:22-147.75.109.163:53256.service: Deactivated successfully. Jul 12 00:27:31.279000 audit[6173]: CRED_DISP pid=6173 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:31.293047 systemd[1]: session-21.scope: Deactivated successfully. 
Jul 12 00:27:31.303740 kernel: audit: type=1106 audit(1752280051.276:565): pid=6173 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:31.303856 kernel: audit: type=1104 audit(1752280051.279:566): pid=6173 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:31.305541 systemd-logind[1905]: Session 21 logged out. Waiting for processes to exit. Jul 12 00:27:31.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.29.120:22-147.75.109.163:53256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:27:31.309027 systemd-logind[1905]: Removed session 21. 
Jul 12 00:27:32.674000 audit[6186]: NETFILTER_CFG table=filter:140 family=2 entries=20 op=nft_register_rule pid=6186 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:27:32.674000 audit[6186]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffd8fac550 a2=0 a3=1 items=0 ppid=3133 pid=6186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:32.674000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:27:32.682000 audit[6186]: NETFILTER_CFG table=nat:141 family=2 entries=110 op=nft_register_chain pid=6186 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:27:32.682000 audit[6186]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=50988 a0=3 a1=ffffd8fac550 a2=0 a3=1 items=0 ppid=3133 pid=6186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:32.682000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:27:36.313390 kernel: kauditd_printk_skb: 7 callbacks suppressed Jul 12 00:27:36.313521 kernel: audit: type=1130 audit(1752280056.300:570): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.29.120:22-147.75.109.163:39430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:27:36.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.29.120:22-147.75.109.163:39430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:27:36.301822 systemd[1]: Started sshd@21-172.31.29.120:22-147.75.109.163:39430.service. Jul 12 00:27:36.476069 sshd[6189]: Accepted publickey for core from 147.75.109.163 port 39430 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:27:36.474000 audit[6189]: USER_ACCT pid=6189 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:36.486527 sshd[6189]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:27:36.484000 audit[6189]: CRED_ACQ pid=6189 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:36.500965 systemd[1]: Started session-22.scope. Jul 12 00:27:36.501810 systemd-logind[1905]: New session 22 of user core. 
Jul 12 00:27:36.508348 kernel: audit: type=1101 audit(1752280056.474:571): pid=6189 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:36.508578 kernel: audit: type=1103 audit(1752280056.484:572): pid=6189 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:36.532272 kernel: audit: type=1006 audit(1752280056.484:573): pid=6189 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jul 12 00:27:36.484000 audit[6189]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff2eb92f0 a2=3 a3=1 items=0 ppid=1 pid=6189 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:36.549901 kernel: audit: type=1300 audit(1752280056.484:573): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff2eb92f0 a2=3 a3=1 items=0 ppid=1 pid=6189 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:36.484000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:27:36.572702 kernel: audit: type=1327 audit(1752280056.484:573): proctitle=737368643A20636F7265205B707269765D Jul 12 00:27:36.532000 audit[6189]: USER_START pid=6189 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 
addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:36.589658 kernel: audit: type=1105 audit(1752280056.532:574): pid=6189 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:36.536000 audit[6192]: CRED_ACQ pid=6192 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:36.612510 kernel: audit: type=1103 audit(1752280056.536:575): pid=6192 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:36.914030 sshd[6189]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:36.915000 audit[6189]: USER_END pid=6189 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:36.919795 systemd[1]: sshd@21-172.31.29.120:22-147.75.109.163:39430.service: Deactivated successfully. Jul 12 00:27:36.921482 systemd[1]: session-22.scope: Deactivated successfully. Jul 12 00:27:36.929199 systemd-logind[1905]: Session 22 logged out. Waiting for processes to exit. Jul 12 00:27:36.930990 systemd-logind[1905]: Removed session 22. 
Jul 12 00:27:36.915000 audit[6189]: CRED_DISP pid=6189 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:36.949440 kernel: audit: type=1106 audit(1752280056.915:576): pid=6189 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:36.949568 kernel: audit: type=1104 audit(1752280056.915:577): pid=6189 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:36.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.29.120:22-147.75.109.163:39430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:27:41.940571 systemd[1]: Started sshd@22-172.31.29.120:22-147.75.109.163:39434.service. Jul 12 00:27:41.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.29.120:22-147.75.109.163:39434 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:27:41.949251 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 12 00:27:41.949363 kernel: audit: type=1130 audit(1752280061.940:579): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.29.120:22-147.75.109.163:39434 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:27:42.118000 audit[6201]: USER_ACCT pid=6201 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:42.120104 sshd[6201]: Accepted publickey for core from 147.75.109.163 port 39434 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:27:42.130000 audit[6201]: CRED_ACQ pid=6201 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:42.133062 sshd[6201]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:27:42.141771 kernel: audit: type=1101 audit(1752280062.118:580): pid=6201 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:42.141939 kernel: audit: type=1103 audit(1752280062.130:581): pid=6201 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:42.150485 kernel: audit: type=1006 audit(1752280062.131:582): pid=6201 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jul 12 00:27:42.131000 audit[6201]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffff195e00 a2=3 a3=1 items=0 ppid=1 pid=6201 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:42.131000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:27:42.165565 kernel: audit: type=1300 audit(1752280062.131:582): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffff195e00 a2=3 a3=1 items=0 ppid=1 pid=6201 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:42.165703 kernel: audit: type=1327 audit(1752280062.131:582): proctitle=737368643A20636F7265205B707269765D Jul 12 00:27:42.169365 systemd-logind[1905]: New session 23 of user core. Jul 12 00:27:42.172085 systemd[1]: Started session-23.scope. Jul 12 00:27:42.185000 audit[6201]: USER_START pid=6201 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:42.185000 audit[6204]: CRED_ACQ pid=6204 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:42.215804 kernel: audit: type=1105 audit(1752280062.185:583): pid=6201 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:42.215950 kernel: audit: type=1103 audit(1752280062.185:584): pid=6204 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh 
res=success' Jul 12 00:27:42.490106 sshd[6201]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:42.492000 audit[6201]: USER_END pid=6201 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:42.497177 systemd[1]: sshd@22-172.31.29.120:22-147.75.109.163:39434.service: Deactivated successfully. Jul 12 00:27:42.498698 systemd[1]: session-23.scope: Deactivated successfully. Jul 12 00:27:42.507066 systemd-logind[1905]: Session 23 logged out. Waiting for processes to exit. Jul 12 00:27:42.508874 systemd-logind[1905]: Removed session 23. Jul 12 00:27:42.492000 audit[6201]: CRED_DISP pid=6201 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:42.524676 kernel: audit: type=1106 audit(1752280062.492:585): pid=6201 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:42.524829 kernel: audit: type=1104 audit(1752280062.492:586): pid=6201 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:42.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.29.120:22-147.75.109.163:39434 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 12 00:27:44.504143 systemd[1]: run-containerd-runc-k8s.io-42b9d6dc0035ca32ad37cd639945c7b3f5acda86cdae0a63724ad380d59dcb45-runc.I5FCKW.mount: Deactivated successfully. Jul 12 00:27:46.098918 env[1913]: time="2025-07-12T00:27:46.098862011Z" level=info msg="StopPodSandbox for \"503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3\"" Jul 12 00:27:46.308192 env[1913]: 2025-07-12 00:27:46.226 [WARNING][6245] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--gwch8-eth0", GenerateName:"calico-apiserver-8494455ff7-", Namespace:"calico-apiserver", SelfLink:"", UID:"7051a960-7ce8-45f1-8249-f71049b41599", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8494455ff7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-120", ContainerID:"2bce4a2ff669ed9ed27e9e09dd730c7cb2d117142ce4f2b40c355e3b5c893604", Pod:"calico-apiserver-8494455ff7-gwch8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic4941d131f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:46.308192 env[1913]: 2025-07-12 00:27:46.227 [INFO][6245] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" Jul 12 00:27:46.308192 env[1913]: 2025-07-12 00:27:46.227 [INFO][6245] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" iface="eth0" netns="" Jul 12 00:27:46.308192 env[1913]: 2025-07-12 00:27:46.227 [INFO][6245] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" Jul 12 00:27:46.308192 env[1913]: 2025-07-12 00:27:46.227 [INFO][6245] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" Jul 12 00:27:46.308192 env[1913]: 2025-07-12 00:27:46.283 [INFO][6252] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" HandleID="k8s-pod-network.503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" Workload="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--gwch8-eth0" Jul 12 00:27:46.308192 env[1913]: 2025-07-12 00:27:46.284 [INFO][6252] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:46.308192 env[1913]: 2025-07-12 00:27:46.284 [INFO][6252] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:27:46.308192 env[1913]: 2025-07-12 00:27:46.298 [WARNING][6252] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" HandleID="k8s-pod-network.503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" Workload="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--gwch8-eth0" Jul 12 00:27:46.308192 env[1913]: 2025-07-12 00:27:46.298 [INFO][6252] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" HandleID="k8s-pod-network.503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" Workload="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--gwch8-eth0" Jul 12 00:27:46.308192 env[1913]: 2025-07-12 00:27:46.300 [INFO][6252] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:27:46.308192 env[1913]: 2025-07-12 00:27:46.303 [INFO][6245] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" Jul 12 00:27:46.309291 env[1913]: time="2025-07-12T00:27:46.309214248Z" level=info msg="TearDown network for sandbox \"503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3\" successfully" Jul 12 00:27:46.309428 env[1913]: time="2025-07-12T00:27:46.309395377Z" level=info msg="StopPodSandbox for \"503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3\" returns successfully" Jul 12 00:27:46.310352 env[1913]: time="2025-07-12T00:27:46.310303290Z" level=info msg="RemovePodSandbox for \"503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3\"" Jul 12 00:27:46.310844 env[1913]: time="2025-07-12T00:27:46.310768749Z" level=info msg="Forcibly stopping sandbox \"503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3\"" Jul 12 00:27:46.476395 env[1913]: 2025-07-12 00:27:46.399 [WARNING][6268] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--gwch8-eth0", GenerateName:"calico-apiserver-8494455ff7-", Namespace:"calico-apiserver", SelfLink:"", UID:"7051a960-7ce8-45f1-8249-f71049b41599", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8494455ff7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-120", ContainerID:"2bce4a2ff669ed9ed27e9e09dd730c7cb2d117142ce4f2b40c355e3b5c893604", Pod:"calico-apiserver-8494455ff7-gwch8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic4941d131f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:46.476395 env[1913]: 2025-07-12 00:27:46.400 [INFO][6268] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" Jul 12 00:27:46.476395 env[1913]: 2025-07-12 00:27:46.400 [INFO][6268] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" iface="eth0" netns="" Jul 12 00:27:46.476395 env[1913]: 2025-07-12 00:27:46.400 [INFO][6268] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" Jul 12 00:27:46.476395 env[1913]: 2025-07-12 00:27:46.400 [INFO][6268] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" Jul 12 00:27:46.476395 env[1913]: 2025-07-12 00:27:46.448 [INFO][6275] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" HandleID="k8s-pod-network.503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" Workload="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--gwch8-eth0" Jul 12 00:27:46.476395 env[1913]: 2025-07-12 00:27:46.449 [INFO][6275] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:46.476395 env[1913]: 2025-07-12 00:27:46.449 [INFO][6275] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:27:46.476395 env[1913]: 2025-07-12 00:27:46.468 [WARNING][6275] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" HandleID="k8s-pod-network.503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" Workload="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--gwch8-eth0" Jul 12 00:27:46.476395 env[1913]: 2025-07-12 00:27:46.468 [INFO][6275] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" HandleID="k8s-pod-network.503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" Workload="ip--172--31--29--120-k8s-calico--apiserver--8494455ff7--gwch8-eth0" Jul 12 00:27:46.476395 env[1913]: 2025-07-12 00:27:46.470 [INFO][6275] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:27:46.476395 env[1913]: 2025-07-12 00:27:46.473 [INFO][6268] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3" Jul 12 00:27:46.477292 env[1913]: time="2025-07-12T00:27:46.477157678Z" level=info msg="TearDown network for sandbox \"503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3\" successfully" Jul 12 00:27:46.486024 env[1913]: time="2025-07-12T00:27:46.485897417Z" level=info msg="RemovePodSandbox \"503e3f2fa0e0bc44a3e8115f6cfaacfaed60fe45d4f3ba343849596171ac08d3\" returns successfully" Jul 12 00:27:46.486699 env[1913]: time="2025-07-12T00:27:46.486638722Z" level=info msg="StopPodSandbox for \"607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a\"" Jul 12 00:27:46.689570 env[1913]: 2025-07-12 00:27:46.572 [WARNING][6291] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--120-k8s-goldmane--58fd7646b9--p759q-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"213fb0de-6a80-4aa5-aeb1-a0af932ccfc6", ResourceVersion:"1322", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-120", ContainerID:"2e8195d46cf0920bcb43e69b01a439a8aeb2d7d2591cec18fa06f734311cd78a", Pod:"goldmane-58fd7646b9-p759q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.107.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali82a774ccf3a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:46.689570 env[1913]: 2025-07-12 00:27:46.573 [INFO][6291] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" Jul 12 00:27:46.689570 env[1913]: 2025-07-12 00:27:46.573 [INFO][6291] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" iface="eth0" netns="" Jul 12 00:27:46.689570 env[1913]: 2025-07-12 00:27:46.573 [INFO][6291] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" Jul 12 00:27:46.689570 env[1913]: 2025-07-12 00:27:46.573 [INFO][6291] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" Jul 12 00:27:46.689570 env[1913]: 2025-07-12 00:27:46.662 [INFO][6298] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" HandleID="k8s-pod-network.607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" Workload="ip--172--31--29--120-k8s-goldmane--58fd7646b9--p759q-eth0" Jul 12 00:27:46.689570 env[1913]: 2025-07-12 00:27:46.663 [INFO][6298] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:46.689570 env[1913]: 2025-07-12 00:27:46.663 [INFO][6298] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:27:46.689570 env[1913]: 2025-07-12 00:27:46.677 [WARNING][6298] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" HandleID="k8s-pod-network.607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" Workload="ip--172--31--29--120-k8s-goldmane--58fd7646b9--p759q-eth0" Jul 12 00:27:46.689570 env[1913]: 2025-07-12 00:27:46.677 [INFO][6298] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" HandleID="k8s-pod-network.607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" Workload="ip--172--31--29--120-k8s-goldmane--58fd7646b9--p759q-eth0" Jul 12 00:27:46.689570 env[1913]: 2025-07-12 00:27:46.680 [INFO][6298] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:27:46.689570 env[1913]: 2025-07-12 00:27:46.685 [INFO][6291] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" Jul 12 00:27:46.689570 env[1913]: time="2025-07-12T00:27:46.688856996Z" level=info msg="TearDown network for sandbox \"607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a\" successfully" Jul 12 00:27:46.689570 env[1913]: time="2025-07-12T00:27:46.688938272Z" level=info msg="StopPodSandbox for \"607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a\" returns successfully" Jul 12 00:27:46.690595 env[1913]: time="2025-07-12T00:27:46.689666605Z" level=info msg="RemovePodSandbox for \"607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a\"" Jul 12 00:27:46.690595 env[1913]: time="2025-07-12T00:27:46.689719033Z" level=info msg="Forcibly stopping sandbox \"607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a\"" Jul 12 00:27:46.879435 env[1913]: 2025-07-12 00:27:46.791 [WARNING][6312] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--120-k8s-goldmane--58fd7646b9--p759q-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"213fb0de-6a80-4aa5-aeb1-a0af932ccfc6", ResourceVersion:"1322", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-120", ContainerID:"2e8195d46cf0920bcb43e69b01a439a8aeb2d7d2591cec18fa06f734311cd78a", Pod:"goldmane-58fd7646b9-p759q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.107.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali82a774ccf3a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:46.879435 env[1913]: 2025-07-12 00:27:46.792 [INFO][6312] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" Jul 12 00:27:46.879435 env[1913]: 2025-07-12 00:27:46.792 [INFO][6312] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" iface="eth0" netns="" Jul 12 00:27:46.879435 env[1913]: 2025-07-12 00:27:46.792 [INFO][6312] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" Jul 12 00:27:46.879435 env[1913]: 2025-07-12 00:27:46.792 [INFO][6312] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" Jul 12 00:27:46.879435 env[1913]: 2025-07-12 00:27:46.855 [INFO][6319] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" HandleID="k8s-pod-network.607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" Workload="ip--172--31--29--120-k8s-goldmane--58fd7646b9--p759q-eth0" Jul 12 00:27:46.879435 env[1913]: 2025-07-12 00:27:46.855 [INFO][6319] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:46.879435 env[1913]: 2025-07-12 00:27:46.855 [INFO][6319] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:27:46.879435 env[1913]: 2025-07-12 00:27:46.867 [WARNING][6319] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" HandleID="k8s-pod-network.607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" Workload="ip--172--31--29--120-k8s-goldmane--58fd7646b9--p759q-eth0" Jul 12 00:27:46.879435 env[1913]: 2025-07-12 00:27:46.867 [INFO][6319] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" HandleID="k8s-pod-network.607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" Workload="ip--172--31--29--120-k8s-goldmane--58fd7646b9--p759q-eth0" Jul 12 00:27:46.879435 env[1913]: 2025-07-12 00:27:46.870 [INFO][6319] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:27:46.879435 env[1913]: 2025-07-12 00:27:46.872 [INFO][6312] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a" Jul 12 00:27:46.879435 env[1913]: time="2025-07-12T00:27:46.875421196Z" level=info msg="TearDown network for sandbox \"607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a\" successfully" Jul 12 00:27:46.884587 env[1913]: time="2025-07-12T00:27:46.884472336Z" level=info msg="RemovePodSandbox \"607b5d65c86e049958905ccd7192628f0cd79f8b84e04bad2f19f842144b3d6a\" returns successfully" Jul 12 00:27:46.885779 env[1913]: time="2025-07-12T00:27:46.885735908Z" level=info msg="StopPodSandbox for \"a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431\"" Jul 12 00:27:47.073471 env[1913]: 2025-07-12 00:27:46.972 [WARNING][6334] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--120-k8s-calico--kube--controllers--b9c4d9bf9--swqxk-eth0", GenerateName:"calico-kube-controllers-b9c4d9bf9-", Namespace:"calico-system", SelfLink:"", UID:"0398604d-88a0-41c4-996f-ea9a3a6c7de4", ResourceVersion:"1137", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b9c4d9bf9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-120", ContainerID:"d3ab911008595b5f310545caa2e0ebbb4f7225447171e7cd4e62d1efee834b32", Pod:"calico-kube-controllers-b9c4d9bf9-swqxk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.107.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali253581b1bee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:47.073471 env[1913]: 2025-07-12 00:27:46.973 [INFO][6334] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" Jul 12 00:27:47.073471 env[1913]: 2025-07-12 00:27:46.973 [INFO][6334] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" iface="eth0" netns="" Jul 12 00:27:47.073471 env[1913]: 2025-07-12 00:27:46.973 [INFO][6334] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" Jul 12 00:27:47.073471 env[1913]: 2025-07-12 00:27:46.973 [INFO][6334] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" Jul 12 00:27:47.073471 env[1913]: 2025-07-12 00:27:47.043 [INFO][6341] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" HandleID="k8s-pod-network.a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" Workload="ip--172--31--29--120-k8s-calico--kube--controllers--b9c4d9bf9--swqxk-eth0" Jul 12 00:27:47.073471 env[1913]: 2025-07-12 00:27:47.043 [INFO][6341] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:47.073471 env[1913]: 2025-07-12 00:27:47.044 [INFO][6341] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:27:47.073471 env[1913]: 2025-07-12 00:27:47.063 [WARNING][6341] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" HandleID="k8s-pod-network.a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" Workload="ip--172--31--29--120-k8s-calico--kube--controllers--b9c4d9bf9--swqxk-eth0" Jul 12 00:27:47.073471 env[1913]: 2025-07-12 00:27:47.063 [INFO][6341] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" HandleID="k8s-pod-network.a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" Workload="ip--172--31--29--120-k8s-calico--kube--controllers--b9c4d9bf9--swqxk-eth0" Jul 12 00:27:47.073471 env[1913]: 2025-07-12 00:27:47.067 [INFO][6341] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:27:47.073471 env[1913]: 2025-07-12 00:27:47.070 [INFO][6334] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" Jul 12 00:27:47.075503 env[1913]: time="2025-07-12T00:27:47.074394453Z" level=info msg="TearDown network for sandbox \"a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431\" successfully" Jul 12 00:27:47.075503 env[1913]: time="2025-07-12T00:27:47.074478657Z" level=info msg="StopPodSandbox for \"a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431\" returns successfully" Jul 12 00:27:47.076208 env[1913]: time="2025-07-12T00:27:47.076151972Z" level=info msg="RemovePodSandbox for \"a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431\"" Jul 12 00:27:47.076546 env[1913]: time="2025-07-12T00:27:47.076474534Z" level=info msg="Forcibly stopping sandbox \"a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431\"" Jul 12 00:27:47.269878 env[1913]: 2025-07-12 00:27:47.182 [WARNING][6355] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--120-k8s-calico--kube--controllers--b9c4d9bf9--swqxk-eth0", GenerateName:"calico-kube-controllers-b9c4d9bf9-", Namespace:"calico-system", SelfLink:"", UID:"0398604d-88a0-41c4-996f-ea9a3a6c7de4", ResourceVersion:"1137", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 26, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b9c4d9bf9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-120", ContainerID:"d3ab911008595b5f310545caa2e0ebbb4f7225447171e7cd4e62d1efee834b32", Pod:"calico-kube-controllers-b9c4d9bf9-swqxk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.107.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali253581b1bee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:27:47.269878 env[1913]: 2025-07-12 00:27:47.184 [INFO][6355] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" Jul 12 00:27:47.269878 env[1913]: 2025-07-12 00:27:47.184 [INFO][6355] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" iface="eth0" netns="" Jul 12 00:27:47.269878 env[1913]: 2025-07-12 00:27:47.184 [INFO][6355] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" Jul 12 00:27:47.269878 env[1913]: 2025-07-12 00:27:47.184 [INFO][6355] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" Jul 12 00:27:47.269878 env[1913]: 2025-07-12 00:27:47.246 [INFO][6362] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" HandleID="k8s-pod-network.a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" Workload="ip--172--31--29--120-k8s-calico--kube--controllers--b9c4d9bf9--swqxk-eth0" Jul 12 00:27:47.269878 env[1913]: 2025-07-12 00:27:47.247 [INFO][6362] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:27:47.269878 env[1913]: 2025-07-12 00:27:47.247 [INFO][6362] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:27:47.269878 env[1913]: 2025-07-12 00:27:47.261 [WARNING][6362] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" HandleID="k8s-pod-network.a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" Workload="ip--172--31--29--120-k8s-calico--kube--controllers--b9c4d9bf9--swqxk-eth0" Jul 12 00:27:47.269878 env[1913]: 2025-07-12 00:27:47.261 [INFO][6362] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" HandleID="k8s-pod-network.a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" Workload="ip--172--31--29--120-k8s-calico--kube--controllers--b9c4d9bf9--swqxk-eth0" Jul 12 00:27:47.269878 env[1913]: 2025-07-12 00:27:47.264 [INFO][6362] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:27:47.269878 env[1913]: 2025-07-12 00:27:47.266 [INFO][6355] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431" Jul 12 00:27:47.271385 env[1913]: time="2025-07-12T00:27:47.271332526Z" level=info msg="TearDown network for sandbox \"a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431\" successfully" Jul 12 00:27:47.279108 env[1913]: time="2025-07-12T00:27:47.279046546Z" level=info msg="RemovePodSandbox \"a0cdfcc7960b0c392019359627a9ce0478be7f02ea424ca6f79398f6c90f2431\" returns successfully" Jul 12 00:27:47.516699 systemd[1]: Started sshd@23-172.31.29.120:22-147.75.109.163:50444.service. Jul 12 00:27:47.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.29.120:22-147.75.109.163:50444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:27:47.520194 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 12 00:27:47.520309 kernel: audit: type=1130 audit(1752280067.516:588): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.29.120:22-147.75.109.163:50444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:27:47.711000 audit[6368]: USER_ACCT pid=6368 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:47.723438 sshd[6368]: Accepted publickey for core from 147.75.109.163 port 50444 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:27:47.726362 sshd[6368]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:27:47.724000 audit[6368]: CRED_ACQ pid=6368 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:47.748076 kernel: audit: type=1101 audit(1752280067.711:589): pid=6368 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:47.748264 kernel: audit: type=1103 audit(1752280067.724:590): pid=6368 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:47.756838 systemd[1]: Started session-24.scope. 
Jul 12 00:27:47.757626 systemd-logind[1905]: New session 24 of user core. Jul 12 00:27:47.776303 kernel: audit: type=1006 audit(1752280067.724:591): pid=6368 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jul 12 00:27:47.724000 audit[6368]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcda91050 a2=3 a3=1 items=0 ppid=1 pid=6368 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:47.794595 kernel: audit: type=1300 audit(1752280067.724:591): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcda91050 a2=3 a3=1 items=0 ppid=1 pid=6368 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:47.724000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:27:47.813848 kernel: audit: type=1327 audit(1752280067.724:591): proctitle=737368643A20636F7265205B707269765D Jul 12 00:27:47.813955 kernel: audit: type=1105 audit(1752280067.803:592): pid=6368 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:47.803000 audit[6368]: USER_START pid=6368 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:47.806000 audit[6371]: CRED_ACQ pid=6371 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:47.834495 kernel: audit: type=1103 audit(1752280067.806:593): pid=6371 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:48.093460 sshd[6368]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:48.094000 audit[6368]: USER_END pid=6368 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:48.111258 systemd-logind[1905]: Session 24 logged out. Waiting for processes to exit. Jul 12 00:27:48.112061 systemd[1]: sshd@23-172.31.29.120:22-147.75.109.163:50444.service: Deactivated successfully. Jul 12 00:27:48.113660 systemd[1]: session-24.scope: Deactivated successfully. Jul 12 00:27:48.117182 systemd-logind[1905]: Removed session 24. 
Jul 12 00:27:48.107000 audit[6368]: CRED_DISP pid=6368 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:48.130488 kernel: audit: type=1106 audit(1752280068.094:594): pid=6368 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:48.130646 kernel: audit: type=1104 audit(1752280068.107:595): pid=6368 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:48.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.29.120:22-147.75.109.163:50444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:27:52.213442 systemd[1]: run-containerd-runc-k8s.io-e9bf6cbf89b0e5a8b36a6e2da654e504313ac059980a57e9e15a289627fb7b83-runc.cUsMXs.mount: Deactivated successfully. Jul 12 00:27:53.119779 systemd[1]: Started sshd@24-172.31.29.120:22-147.75.109.163:50448.service. Jul 12 00:27:53.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.29.120:22-147.75.109.163:50448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:27:53.122606 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 12 00:27:53.122708 kernel: audit: type=1130 audit(1752280073.119:597): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.29.120:22-147.75.109.163:50448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:27:53.307856 sshd[6402]: Accepted publickey for core from 147.75.109.163 port 50448 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:27:53.306000 audit[6402]: USER_ACCT pid=6402 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:53.321118 sshd[6402]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:27:53.319000 audit[6402]: CRED_ACQ pid=6402 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:53.331395 kernel: audit: type=1101 audit(1752280073.306:598): pid=6402 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:53.331538 kernel: audit: type=1103 audit(1752280073.319:599): pid=6402 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:53.338037 kernel: audit: type=1006 audit(1752280073.319:600): pid=6402 uid=0 
subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jul 12 00:27:53.319000 audit[6402]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffbac96a0 a2=3 a3=1 items=0 ppid=1 pid=6402 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:53.350523 kernel: audit: type=1300 audit(1752280073.319:600): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffbac96a0 a2=3 a3=1 items=0 ppid=1 pid=6402 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:53.357982 systemd-logind[1905]: New session 25 of user core. Jul 12 00:27:53.359926 systemd[1]: Started session-25.scope. Jul 12 00:27:53.319000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:27:53.364898 kernel: audit: type=1327 audit(1752280073.319:600): proctitle=737368643A20636F7265205B707269765D Jul 12 00:27:53.376000 audit[6402]: USER_START pid=6402 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:53.391000 audit[6405]: CRED_ACQ pid=6405 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:53.406849 kernel: audit: type=1105 audit(1752280073.376:601): pid=6402 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:53.406971 kernel: audit: type=1103 audit(1752280073.391:602): pid=6405 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:53.707600 sshd[6402]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:53.709000 audit[6402]: USER_END pid=6402 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:53.722568 systemd[1]: sshd@24-172.31.29.120:22-147.75.109.163:50448.service: Deactivated successfully. Jul 12 00:27:53.724204 systemd[1]: session-25.scope: Deactivated successfully. Jul 12 00:27:53.709000 audit[6402]: CRED_DISP pid=6402 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:53.726946 systemd-logind[1905]: Session 25 logged out. Waiting for processes to exit. 
Jul 12 00:27:53.736259 kernel: audit: type=1106 audit(1752280073.709:603): pid=6402 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:53.736391 kernel: audit: type=1104 audit(1752280073.709:604): pid=6402 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:53.738515 systemd-logind[1905]: Removed session 25. Jul 12 00:27:53.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.29.120:22-147.75.109.163:50448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:27:55.184611 systemd[1]: run-containerd-runc-k8s.io-e9bf6cbf89b0e5a8b36a6e2da654e504313ac059980a57e9e15a289627fb7b83-runc.rxxhp3.mount: Deactivated successfully. Jul 12 00:27:55.251694 systemd[1]: run-containerd-runc-k8s.io-2e09855f4a8f003eb5fdb3d582c84cdbc012a20bcaf513ba338317d484a84415-runc.szyv51.mount: Deactivated successfully. Jul 12 00:27:58.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.29.120:22-147.75.109.163:33878 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:27:58.733901 systemd[1]: Started sshd@25-172.31.29.120:22-147.75.109.163:33878.service. 
Jul 12 00:27:58.738261 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 12 00:27:58.738427 kernel: audit: type=1130 audit(1752280078.733:606): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.29.120:22-147.75.109.163:33878 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:27:58.925000 audit[6454]: USER_ACCT pid=6454 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:58.927042 sshd[6454]: Accepted publickey for core from 147.75.109.163 port 33878 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:27:58.930663 sshd[6454]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:27:58.945623 systemd-logind[1905]: New session 26 of user core. Jul 12 00:27:58.948733 systemd[1]: Started session-26.scope. 
Jul 12 00:27:58.925000 audit[6454]: CRED_ACQ pid=6454 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:58.963504 kernel: audit: type=1101 audit(1752280078.925:607): pid=6454 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:58.963667 kernel: audit: type=1103 audit(1752280078.925:608): pid=6454 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:58.983192 kernel: audit: type=1006 audit(1752280078.925:609): pid=6454 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jul 12 00:27:58.925000 audit[6454]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff306d4f0 a2=3 a3=1 items=0 ppid=1 pid=6454 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:59.000648 kernel: audit: type=1300 audit(1752280078.925:609): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff306d4f0 a2=3 a3=1 items=0 ppid=1 pid=6454 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:27:58.925000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:27:59.017708 kernel: audit: type=1327 audit(1752280078.925:609): proctitle=737368643A20636F7265205B707269765D Jul 12 
00:27:58.995000 audit[6454]: USER_START pid=6454 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:59.032743 kernel: audit: type=1105 audit(1752280078.995:610): pid=6454 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:59.001000 audit[6457]: CRED_ACQ pid=6457 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:59.051545 kernel: audit: type=1103 audit(1752280079.001:611): pid=6457 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:59.272581 sshd[6454]: pam_unix(sshd:session): session closed for user core Jul 12 00:27:59.273000 audit[6454]: USER_END pid=6454 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:59.289788 systemd[1]: sshd@25-172.31.29.120:22-147.75.109.163:33878.service: Deactivated successfully. Jul 12 00:27:59.293268 systemd[1]: session-26.scope: Deactivated successfully. Jul 12 00:27:59.293314 systemd-logind[1905]: Session 26 logged out. 
Waiting for processes to exit. Jul 12 00:27:59.273000 audit[6454]: CRED_DISP pid=6454 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:59.305484 kernel: audit: type=1106 audit(1752280079.273:612): pid=6454 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:59.305680 kernel: audit: type=1104 audit(1752280079.273:613): pid=6454 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:27:59.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.29.120:22-147.75.109.163:33878 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:27:59.312398 systemd-logind[1905]: Removed session 26. Jul 12 00:28:04.301337 systemd[1]: Started sshd@26-172.31.29.120:22-147.75.109.163:33886.service. Jul 12 00:28:04.313678 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 12 00:28:04.313831 kernel: audit: type=1130 audit(1752280084.301:615): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.29.120:22-147.75.109.163:33886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:28:04.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.29.120:22-147.75.109.163:33886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:28:04.502683 sshd[6475]: Accepted publickey for core from 147.75.109.163 port 33886 ssh2: RSA SHA256:hAayEOBHnTpwll2xPQSU8cSp7XCWn/pXChvPbqogNKA Jul 12 00:28:04.501000 audit[6475]: USER_ACCT pid=6475 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:28:04.515044 sshd[6475]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:28:04.512000 audit[6475]: CRED_ACQ pid=6475 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:28:04.529364 systemd[1]: Started session-27.scope. Jul 12 00:28:04.532704 kernel: audit: type=1101 audit(1752280084.501:616): pid=6475 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:28:04.532864 kernel: audit: type=1103 audit(1752280084.512:617): pid=6475 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:28:04.532785 systemd-logind[1905]: New session 27 of user core. 
Jul 12 00:28:04.565130 kernel: audit: type=1006 audit(1752280084.513:618): pid=6475 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jul 12 00:28:04.513000 audit[6475]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc2eb1c00 a2=3 a3=1 items=0 ppid=1 pid=6475 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:28:04.581547 kernel: audit: type=1300 audit(1752280084.513:618): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc2eb1c00 a2=3 a3=1 items=0 ppid=1 pid=6475 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:28:04.513000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:28:04.595651 kernel: audit: type=1327 audit(1752280084.513:618): proctitle=737368643A20636F7265205B707269765D Jul 12 00:28:04.549000 audit[6475]: USER_START pid=6475 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:28:04.609619 kernel: audit: type=1105 audit(1752280084.549:619): pid=6475 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:28:04.552000 audit[6479]: CRED_ACQ pid=6479 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 
addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:28:04.629437 kernel: audit: type=1103 audit(1752280084.552:620): pid=6479 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:28:04.898483 sshd[6475]: pam_unix(sshd:session): session closed for user core Jul 12 00:28:04.899000 audit[6475]: USER_END pid=6475 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:28:04.915453 systemd-logind[1905]: Session 27 logged out. Waiting for processes to exit. Jul 12 00:28:04.916926 systemd[1]: sshd@26-172.31.29.120:22-147.75.109.163:33886.service: Deactivated successfully. Jul 12 00:28:04.919594 systemd[1]: session-27.scope: Deactivated successfully. 
Jul 12 00:28:04.899000 audit[6475]: CRED_DISP pid=6475 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:28:04.929596 kernel: audit: type=1106 audit(1752280084.899:621): pid=6475 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:28:04.929717 kernel: audit: type=1104 audit(1752280084.899:622): pid=6475 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Jul 12 00:28:04.931101 systemd-logind[1905]: Removed session 27. Jul 12 00:28:04.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.29.120:22-147.75.109.163:33886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:28:14.477961 systemd[1]: run-containerd-runc-k8s.io-42b9d6dc0035ca32ad37cd639945c7b3f5acda86cdae0a63724ad380d59dcb45-runc.mSVwGZ.mount: Deactivated successfully. 
Jul 12 00:28:18.450551 env[1913]: time="2025-07-12T00:28:18.450445582Z" level=info msg="shim disconnected" id=c282e48d63ff3a66722e8c7e4666d70a83d1f6a6d693739a252fab0b8ba0abe8 Jul 12 00:28:18.451386 env[1913]: time="2025-07-12T00:28:18.451285635Z" level=warning msg="cleaning up after shim disconnected" id=c282e48d63ff3a66722e8c7e4666d70a83d1f6a6d693739a252fab0b8ba0abe8 namespace=k8s.io Jul 12 00:28:18.451563 env[1913]: time="2025-07-12T00:28:18.451531732Z" level=info msg="cleaning up dead shim" Jul 12 00:28:18.452296 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c282e48d63ff3a66722e8c7e4666d70a83d1f6a6d693739a252fab0b8ba0abe8-rootfs.mount: Deactivated successfully. Jul 12 00:28:18.471633 env[1913]: time="2025-07-12T00:28:18.471555010Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:28:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6544 runtime=io.containerd.runc.v2\n" Jul 12 00:28:18.816660 systemd[1]: run-containerd-runc-k8s.io-2e09855f4a8f003eb5fdb3d582c84cdbc012a20bcaf513ba338317d484a84415-runc.n0Mz2K.mount: Deactivated successfully. Jul 12 00:28:19.137675 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f9f138908834a242680da4b5ccde53b90ae8f7aae8ee0c7cc7cce07c4f2f541-rootfs.mount: Deactivated successfully. 
Jul 12 00:28:19.141648 env[1913]: time="2025-07-12T00:28:19.141560008Z" level=info msg="shim disconnected" id=8f9f138908834a242680da4b5ccde53b90ae8f7aae8ee0c7cc7cce07c4f2f541 Jul 12 00:28:19.141953 env[1913]: time="2025-07-12T00:28:19.141906222Z" level=warning msg="cleaning up after shim disconnected" id=8f9f138908834a242680da4b5ccde53b90ae8f7aae8ee0c7cc7cce07c4f2f541 namespace=k8s.io Jul 12 00:28:19.142110 env[1913]: time="2025-07-12T00:28:19.142080763Z" level=info msg="cleaning up dead shim" Jul 12 00:28:19.156443 env[1913]: time="2025-07-12T00:28:19.156385712Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:28:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6593 runtime=io.containerd.runc.v2\n" Jul 12 00:28:19.366296 kubelet[2983]: I0712 00:28:19.365421 2983 scope.go:117] "RemoveContainer" containerID="8f9f138908834a242680da4b5ccde53b90ae8f7aae8ee0c7cc7cce07c4f2f541" Jul 12 00:28:19.370273 env[1913]: time="2025-07-12T00:28:19.370160007Z" level=info msg="CreateContainer within sandbox \"8f9809ac5a26935b231c54e0ca73743f80522bca9038941772e99f534fe54d39\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jul 12 00:28:19.371526 kubelet[2983]: I0712 00:28:19.371492 2983 scope.go:117] "RemoveContainer" containerID="c282e48d63ff3a66722e8c7e4666d70a83d1f6a6d693739a252fab0b8ba0abe8" Jul 12 00:28:19.375625 env[1913]: time="2025-07-12T00:28:19.375572418Z" level=info msg="CreateContainer within sandbox \"8f03369749361fd5862a9ccd9f9ec07ba37834258f7a28385ffb26685e76ec36\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 12 00:28:19.413603 env[1913]: time="2025-07-12T00:28:19.413431123Z" level=info msg="CreateContainer within sandbox \"8f9809ac5a26935b231c54e0ca73743f80522bca9038941772e99f534fe54d39\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"c548fb4d0e1786b3e5611d236aa5a045e0e76c48bdbc7aa70ec6660410edfa42\"" Jul 12 00:28:19.414869 env[1913]: 
time="2025-07-12T00:28:19.414797150Z" level=info msg="StartContainer for \"c548fb4d0e1786b3e5611d236aa5a045e0e76c48bdbc7aa70ec6660410edfa42\"" Jul 12 00:28:19.421932 env[1913]: time="2025-07-12T00:28:19.421824314Z" level=info msg="CreateContainer within sandbox \"8f03369749361fd5862a9ccd9f9ec07ba37834258f7a28385ffb26685e76ec36\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"f8b8354f81949234f13b823dee506fd2c348c27280f0daee06e92760c3469bbd\"" Jul 12 00:28:19.422784 env[1913]: time="2025-07-12T00:28:19.422732142Z" level=info msg="StartContainer for \"f8b8354f81949234f13b823dee506fd2c348c27280f0daee06e92760c3469bbd\"" Jul 12 00:28:19.452520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2591594699.mount: Deactivated successfully. Jul 12 00:28:19.615211 env[1913]: time="2025-07-12T00:28:19.615145861Z" level=info msg="StartContainer for \"c548fb4d0e1786b3e5611d236aa5a045e0e76c48bdbc7aa70ec6660410edfa42\" returns successfully" Jul 12 00:28:19.636794 env[1913]: time="2025-07-12T00:28:19.633712667Z" level=info msg="StartContainer for \"f8b8354f81949234f13b823dee506fd2c348c27280f0daee06e92760c3469bbd\" returns successfully" Jul 12 00:28:23.826508 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-292358489d61335d1708557931706312612edee8479aed650b2afd8ef61bb125-rootfs.mount: Deactivated successfully. 
Jul 12 00:28:23.827881 env[1913]: time="2025-07-12T00:28:23.827819314Z" level=info msg="shim disconnected" id=292358489d61335d1708557931706312612edee8479aed650b2afd8ef61bb125 Jul 12 00:28:23.828981 env[1913]: time="2025-07-12T00:28:23.828907899Z" level=warning msg="cleaning up after shim disconnected" id=292358489d61335d1708557931706312612edee8479aed650b2afd8ef61bb125 namespace=k8s.io Jul 12 00:28:23.829316 env[1913]: time="2025-07-12T00:28:23.829187465Z" level=info msg="cleaning up dead shim" Jul 12 00:28:23.847281 env[1913]: time="2025-07-12T00:28:23.847159259Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:28:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6692 runtime=io.containerd.runc.v2\n" Jul 12 00:28:24.394375 kubelet[2983]: I0712 00:28:24.394321 2983 scope.go:117] "RemoveContainer" containerID="292358489d61335d1708557931706312612edee8479aed650b2afd8ef61bb125" Jul 12 00:28:24.397700 env[1913]: time="2025-07-12T00:28:24.397636665Z" level=info msg="CreateContainer within sandbox \"8cd45536d37bd6ba7057f32b5199b1e4f4080e88be55fbd7de5adf7230c5741d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jul 12 00:28:24.432716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2627489156.mount: Deactivated successfully. 
Jul 12 00:28:24.434745 env[1913]: time="2025-07-12T00:28:24.434671405Z" level=info msg="CreateContainer within sandbox \"8cd45536d37bd6ba7057f32b5199b1e4f4080e88be55fbd7de5adf7230c5741d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"9f392e0487da0091b4330c391528576ce9dc2b098642ad99e42784af656dde99\""
Jul 12 00:28:24.435743 env[1913]: time="2025-07-12T00:28:24.435690811Z" level=info msg="StartContainer for \"9f392e0487da0091b4330c391528576ce9dc2b098642ad99e42784af656dde99\""
Jul 12 00:28:24.582939 env[1913]: time="2025-07-12T00:28:24.582877761Z" level=info msg="StartContainer for \"9f392e0487da0091b4330c391528576ce9dc2b098642ad99e42784af656dde99\" returns successfully"
Jul 12 00:28:25.315181 systemd[1]: run-containerd-runc-k8s.io-2e09855f4a8f003eb5fdb3d582c84cdbc012a20bcaf513ba338317d484a84415-runc.vxbIvq.mount: Deactivated successfully.
Jul 12 00:28:26.610391 kubelet[2983]: E0712 00:28:26.610328 2983 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-120?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 12 00:28:31.102790 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c548fb4d0e1786b3e5611d236aa5a045e0e76c48bdbc7aa70ec6660410edfa42-rootfs.mount: Deactivated successfully.
Jul 12 00:28:31.114562 env[1913]: time="2025-07-12T00:28:31.114498639Z" level=info msg="shim disconnected" id=c548fb4d0e1786b3e5611d236aa5a045e0e76c48bdbc7aa70ec6660410edfa42
Jul 12 00:28:31.115380 env[1913]: time="2025-07-12T00:28:31.115341403Z" level=warning msg="cleaning up after shim disconnected" id=c548fb4d0e1786b3e5611d236aa5a045e0e76c48bdbc7aa70ec6660410edfa42 namespace=k8s.io
Jul 12 00:28:31.115511 env[1913]: time="2025-07-12T00:28:31.115482523Z" level=info msg="cleaning up dead shim"
Jul 12 00:28:31.130321 env[1913]: time="2025-07-12T00:28:31.130262528Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:28:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6790 runtime=io.containerd.runc.v2\n"
Jul 12 00:28:31.417859 kubelet[2983]: I0712 00:28:31.417789 2983 scope.go:117] "RemoveContainer" containerID="8f9f138908834a242680da4b5ccde53b90ae8f7aae8ee0c7cc7cce07c4f2f541"
Jul 12 00:28:31.419046 kubelet[2983]: I0712 00:28:31.419014 2983 scope.go:117] "RemoveContainer" containerID="c548fb4d0e1786b3e5611d236aa5a045e0e76c48bdbc7aa70ec6660410edfa42"
Jul 12 00:28:31.421911 kubelet[2983]: E0712 00:28:31.421852 2983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-5bf8dfcb4-9rrnd_tigera-operator(d1fdad17-8e7b-489f-a66c-f53b55686f7a)\"" pod="tigera-operator/tigera-operator-5bf8dfcb4-9rrnd" podUID="d1fdad17-8e7b-489f-a66c-f53b55686f7a"
Jul 12 00:28:31.422187 env[1913]: time="2025-07-12T00:28:31.421859883Z" level=info msg="RemoveContainer for \"8f9f138908834a242680da4b5ccde53b90ae8f7aae8ee0c7cc7cce07c4f2f541\""
Jul 12 00:28:31.429465 env[1913]: time="2025-07-12T00:28:31.429389152Z" level=info msg="RemoveContainer for \"8f9f138908834a242680da4b5ccde53b90ae8f7aae8ee0c7cc7cce07c4f2f541\" returns successfully"
Jul 12 00:28:33.423663 update_engine[1906]: I0712 00:28:33.423513 1906 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jul 12 00:28:33.423663 update_engine[1906]: I0712 00:28:33.423581 1906 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jul 12 00:28:33.428299 update_engine[1906]: I0712 00:28:33.426445 1906 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jul 12 00:28:33.429121 update_engine[1906]: I0712 00:28:33.429081 1906 omaha_request_params.cc:62] Current group set to lts
Jul 12 00:28:33.431931 update_engine[1906]: I0712 00:28:33.431874 1906 update_attempter.cc:499] Already updated boot flags. Skipping.
Jul 12 00:28:33.431931 update_engine[1906]: I0712 00:28:33.431916 1906 update_attempter.cc:643] Scheduling an action processor start.
Jul 12 00:28:33.432137 update_engine[1906]: I0712 00:28:33.431949 1906 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 12 00:28:33.432137 update_engine[1906]: I0712 00:28:33.432011 1906 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jul 12 00:28:33.433647 update_engine[1906]: I0712 00:28:33.433598 1906 omaha_request_action.cc:270] Posting an Omaha request to disabled
Jul 12 00:28:33.433647 update_engine[1906]: I0712 00:28:33.433636 1906 omaha_request_action.cc:271] Request:
Jul 12 00:28:33.433647 update_engine[1906]:
Jul 12 00:28:33.433647 update_engine[1906]:
Jul 12 00:28:33.433647 update_engine[1906]:
Jul 12 00:28:33.433647 update_engine[1906]:
Jul 12 00:28:33.433647 update_engine[1906]:
Jul 12 00:28:33.433647 update_engine[1906]:
Jul 12 00:28:33.433647 update_engine[1906]:
Jul 12 00:28:33.433647 update_engine[1906]:
Jul 12 00:28:33.433647 update_engine[1906]: I0712 00:28:33.433651 1906 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 12 00:28:33.435112 locksmithd[1977]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jul 12 00:28:33.442545 update_engine[1906]: I0712 00:28:33.442497 1906 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 12 00:28:33.442932 update_engine[1906]: I0712 00:28:33.442900 1906 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 12 00:28:33.452934 update_engine[1906]: E0712 00:28:33.452881 1906 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 12 00:28:33.453082 update_engine[1906]: I0712 00:28:33.453032 1906 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jul 12 00:28:36.612439 kubelet[2983]: E0712 00:28:36.611808 2983 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-120?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"