Sep 6 00:02:55.981403 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 6 00:02:55.981443 kernel: Linux version 5.15.190-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Sep 5 23:00:12 -00 2025
Sep 6 00:02:55.981466 kernel: efi: EFI v2.70 by EDK II
Sep 6 00:02:55.981482 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x716fcf98
Sep 6 00:02:55.981496 kernel: ACPI: Early table checksum verification disabled
Sep 6 00:02:55.981509 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 6 00:02:55.981526 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 6 00:02:55.981540 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 6 00:02:55.981554 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 6 00:02:55.981568 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 6 00:02:55.981585 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 6 00:02:55.981600 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 6 00:02:55.981614 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 6 00:02:55.981629 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 6 00:02:55.981645 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 6 00:02:55.981664 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 6 00:02:55.981679 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 6 00:02:55.981693 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 6 00:02:55.981708 kernel: printk: bootconsole [uart0] enabled
Sep 6 00:02:55.981723 kernel: NUMA: Failed to initialise from firmware
Sep 6 00:02:55.981738 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 6 00:02:55.981753 kernel: NUMA: NODE_DATA [mem 0x4b5843900-0x4b5848fff]
Sep 6 00:02:55.981767 kernel: Zone ranges:
Sep 6 00:02:55.981782 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 6 00:02:55.981796 kernel: DMA32 empty
Sep 6 00:02:55.981811 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 6 00:02:55.981829 kernel: Movable zone start for each node
Sep 6 00:02:55.981844 kernel: Early memory node ranges
Sep 6 00:02:55.981859 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 6 00:02:55.981873 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 6 00:02:55.981888 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Sep 6 00:02:55.981903 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 6 00:02:55.981917 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 6 00:02:55.981932 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 6 00:02:55.981947 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 6 00:02:55.981975 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 6 00:02:55.982008 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 6 00:02:55.982026 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 6 00:02:55.982046 kernel: psci: probing for conduit method from ACPI.
Sep 6 00:02:55.982061 kernel: psci: PSCIv1.0 detected in firmware.
Sep 6 00:02:55.982083 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 6 00:02:55.982099 kernel: psci: Trusted OS migration not required
Sep 6 00:02:55.982114 kernel: psci: SMC Calling Convention v1.1
Sep 6 00:02:55.982135 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Sep 6 00:02:55.982151 kernel: ACPI: SRAT not present
Sep 6 00:02:55.982167 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880
Sep 6 00:02:55.982228 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096
Sep 6 00:02:55.982249 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 6 00:02:55.982265 kernel: Detected PIPT I-cache on CPU0
Sep 6 00:02:55.982281 kernel: CPU features: detected: GIC system register CPU interface
Sep 6 00:02:55.982297 kernel: CPU features: detected: Spectre-v2
Sep 6 00:02:55.982313 kernel: CPU features: detected: Spectre-v3a
Sep 6 00:02:55.982328 kernel: CPU features: detected: Spectre-BHB
Sep 6 00:02:55.982344 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 6 00:02:55.982365 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 6 00:02:55.982382 kernel: CPU features: detected: ARM erratum 1742098
Sep 6 00:02:55.982398 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 6 00:02:55.982413 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Sep 6 00:02:55.982428 kernel: Policy zone: Normal
Sep 6 00:02:55.982446 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5cb382ab59aa1336098b36da02e2d4491706a6fda80ee56c4ff8582cce9206a4
Sep 6 00:02:55.982463 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 6 00:02:55.982479 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 6 00:02:55.982494 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 6 00:02:55.982510 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 6 00:02:55.982529 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Sep 6 00:02:55.982546 kernel: Memory: 3824460K/4030464K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 206004K reserved, 0K cma-reserved)
Sep 6 00:02:55.982562 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 6 00:02:55.982577 kernel: trace event string verifier disabled
Sep 6 00:02:55.982593 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 6 00:02:55.982609 kernel: rcu: RCU event tracing is enabled.
Sep 6 00:02:55.982625 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 6 00:02:55.982641 kernel: Trampoline variant of Tasks RCU enabled.
Sep 6 00:02:55.982656 kernel: Tracing variant of Tasks RCU enabled.
Sep 6 00:02:55.982672 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 6 00:02:55.982687 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 6 00:02:55.982703 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 6 00:02:55.982722 kernel: GICv3: 96 SPIs implemented
Sep 6 00:02:55.982737 kernel: GICv3: 0 Extended SPIs implemented
Sep 6 00:02:55.982753 kernel: GICv3: Distributor has no Range Selector support
Sep 6 00:02:55.982768 kernel: Root IRQ handler: gic_handle_irq
Sep 6 00:02:55.982783 kernel: GICv3: 16 PPIs implemented
Sep 6 00:02:55.982798 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 6 00:02:55.982813 kernel: ACPI: SRAT not present
Sep 6 00:02:55.982828 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 6 00:02:55.982844 kernel: ITS@0x0000000010080000: allocated 8192 Devices @400090000 (indirect, esz 8, psz 64K, shr 1)
Sep 6 00:02:55.982860 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000a0000 (flat, esz 8, psz 64K, shr 1)
Sep 6 00:02:55.982875 kernel: GICv3: using LPI property table @0x00000004000b0000
Sep 6 00:02:55.982894 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 6 00:02:55.982910 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Sep 6 00:02:55.982925 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 6 00:02:55.982941 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 6 00:02:55.982956 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 6 00:02:55.982972 kernel: Console: colour dummy device 80x25
Sep 6 00:02:55.982988 kernel: printk: console [tty1] enabled
Sep 6 00:02:55.983004 kernel: ACPI: Core revision 20210730
Sep 6 00:02:55.983020 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 6 00:02:55.983036 kernel: pid_max: default: 32768 minimum: 301
Sep 6 00:02:55.983056 kernel: LSM: Security Framework initializing
Sep 6 00:02:55.983072 kernel: SELinux: Initializing.
Sep 6 00:02:55.983088 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 6 00:02:55.983104 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 6 00:02:55.983120 kernel: rcu: Hierarchical SRCU implementation.
Sep 6 00:02:55.983136 kernel: Platform MSI: ITS@0x10080000 domain created
Sep 6 00:02:55.983152 kernel: PCI/MSI: ITS@0x10080000 domain created
Sep 6 00:02:55.983168 kernel: Remapping and enabling EFI services.
Sep 6 00:02:55.983206 kernel: smp: Bringing up secondary CPUs ...
Sep 6 00:02:55.983225 kernel: Detected PIPT I-cache on CPU1
Sep 6 00:02:55.983247 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 6 00:02:55.983263 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Sep 6 00:02:55.983279 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 6 00:02:55.983295 kernel: smp: Brought up 1 node, 2 CPUs
Sep 6 00:02:55.983311 kernel: SMP: Total of 2 processors activated.
Sep 6 00:02:55.983327 kernel: CPU features: detected: 32-bit EL0 Support
Sep 6 00:02:55.983343 kernel: CPU features: detected: 32-bit EL1 Support
Sep 6 00:02:55.983358 kernel: CPU features: detected: CRC32 instructions
Sep 6 00:02:55.983374 kernel: CPU: All CPU(s) started at EL1
Sep 6 00:02:55.983394 kernel: alternatives: patching kernel code
Sep 6 00:02:55.983410 kernel: devtmpfs: initialized
Sep 6 00:02:55.983436 kernel: KASLR disabled due to lack of seed
Sep 6 00:02:55.983457 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 6 00:02:55.983473 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 6 00:02:55.983490 kernel: pinctrl core: initialized pinctrl subsystem
Sep 6 00:02:55.983506 kernel: SMBIOS 3.0.0 present.
Sep 6 00:02:55.983522 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 6 00:02:55.983539 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 6 00:02:55.983555 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 6 00:02:55.983572 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 6 00:02:55.983592 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 6 00:02:55.983609 kernel: audit: initializing netlink subsys (disabled)
Sep 6 00:02:55.983626 kernel: audit: type=2000 audit(0.293:1): state=initialized audit_enabled=0 res=1
Sep 6 00:02:55.983642 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 6 00:02:55.983659 kernel: cpuidle: using governor menu
Sep 6 00:02:55.983679 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 6 00:02:55.983696 kernel: ASID allocator initialised with 32768 entries
Sep 6 00:02:55.983712 kernel: ACPI: bus type PCI registered
Sep 6 00:02:55.983728 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 6 00:02:55.983744 kernel: Serial: AMBA PL011 UART driver
Sep 6 00:02:55.983761 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 6 00:02:55.983777 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Sep 6 00:02:55.983794 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 6 00:02:55.983810 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Sep 6 00:02:55.983830 kernel: cryptd: max_cpu_qlen set to 1000
Sep 6 00:02:55.983847 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 6 00:02:55.983863 kernel: ACPI: Added _OSI(Module Device)
Sep 6 00:02:55.983879 kernel: ACPI: Added _OSI(Processor Device)
Sep 6 00:02:55.983896 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 6 00:02:55.983912 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 6 00:02:55.983928 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 6 00:02:55.983945 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 6 00:02:55.983961 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 6 00:02:55.983978 kernel: ACPI: Interpreter enabled
Sep 6 00:02:55.983998 kernel: ACPI: Using GIC for interrupt routing
Sep 6 00:02:55.984014 kernel: ACPI: MCFG table detected, 1 entries
Sep 6 00:02:55.984030 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 6 00:02:55.984333 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 6 00:02:55.984532 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 6 00:02:55.984720 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 6 00:02:55.984908 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 6 00:02:55.985100 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 6 00:02:55.985137 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Sep 6 00:02:55.985157 kernel: acpiphp: Slot [1] registered
Sep 6 00:02:55.985173 kernel: acpiphp: Slot [2] registered
Sep 6 00:02:55.985207 kernel: acpiphp: Slot [3] registered
Sep 6 00:02:55.985226 kernel: acpiphp: Slot [4] registered
Sep 6 00:02:55.985242 kernel: acpiphp: Slot [5] registered
Sep 6 00:02:55.985259 kernel: acpiphp: Slot [6] registered
Sep 6 00:02:55.985275 kernel: acpiphp: Slot [7] registered
Sep 6 00:02:55.985297 kernel: acpiphp: Slot [8] registered
Sep 6 00:02:55.985313 kernel: acpiphp: Slot [9] registered
Sep 6 00:02:55.985330 kernel: acpiphp: Slot [10] registered
Sep 6 00:02:55.985346 kernel: acpiphp: Slot [11] registered
Sep 6 00:02:55.985362 kernel: acpiphp: Slot [12] registered
Sep 6 00:02:55.985378 kernel: acpiphp: Slot [13] registered
Sep 6 00:02:55.985395 kernel: acpiphp: Slot [14] registered
Sep 6 00:02:55.985411 kernel: acpiphp: Slot [15] registered
Sep 6 00:02:55.985427 kernel: acpiphp: Slot [16] registered
Sep 6 00:02:55.985447 kernel: acpiphp: Slot [17] registered
Sep 6 00:02:55.985463 kernel: acpiphp: Slot [18] registered
Sep 6 00:02:55.985480 kernel: acpiphp: Slot [19] registered
Sep 6 00:02:55.985496 kernel: acpiphp: Slot [20] registered
Sep 6 00:02:55.985512 kernel: acpiphp: Slot [21] registered
Sep 6 00:02:55.985528 kernel: acpiphp: Slot [22] registered
Sep 6 00:02:55.985545 kernel: acpiphp: Slot [23] registered
Sep 6 00:02:55.985561 kernel: acpiphp: Slot [24] registered
Sep 6 00:02:55.985577 kernel: acpiphp: Slot [25] registered
Sep 6 00:02:55.985594 kernel: acpiphp: Slot [26] registered
Sep 6 00:02:55.985614 kernel: acpiphp: Slot [27] registered
Sep 6 00:02:55.985630 kernel: acpiphp: Slot [28] registered
Sep 6 00:02:55.985646 kernel: acpiphp: Slot [29] registered
Sep 6 00:02:55.985662 kernel: acpiphp: Slot [30] registered
Sep 6 00:02:55.985679 kernel: acpiphp: Slot [31] registered
Sep 6 00:02:55.985696 kernel: PCI host bridge to bus 0000:00
Sep 6 00:02:55.985893 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 6 00:02:55.986069 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 6 00:02:55.986270 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 6 00:02:55.986453 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 6 00:02:55.986669 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Sep 6 00:02:55.986881 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Sep 6 00:02:55.987082 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Sep 6 00:02:55.987346 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 6 00:02:55.987553 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Sep 6 00:02:55.987752 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 6 00:02:55.987963 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 6 00:02:55.988162 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Sep 6 00:02:56.007474 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Sep 6 00:02:56.007683 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Sep 6 00:02:56.007874 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 6 00:02:56.008073 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Sep 6 00:02:56.011035 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Sep 6 00:02:56.011334 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Sep 6 00:02:56.011532 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Sep 6 00:02:56.011733 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Sep 6 00:02:56.011915 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 6 00:02:56.012087 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 6 00:02:56.012292 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 6 00:02:56.012317 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 6 00:02:56.012334 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 6 00:02:56.012352 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 6 00:02:56.012368 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 6 00:02:56.012385 kernel: iommu: Default domain type: Translated
Sep 6 00:02:56.012402 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 6 00:02:56.012419 kernel: vgaarb: loaded
Sep 6 00:02:56.012435 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 6 00:02:56.012457 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 6 00:02:56.012474 kernel: PTP clock support registered
Sep 6 00:02:56.012490 kernel: Registered efivars operations
Sep 6 00:02:56.012507 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 6 00:02:56.012523 kernel: VFS: Disk quotas dquot_6.6.0
Sep 6 00:02:56.012540 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 6 00:02:56.012556 kernel: pnp: PnP ACPI init
Sep 6 00:02:56.012751 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 6 00:02:56.012777 kernel: pnp: PnP ACPI: found 1 devices
Sep 6 00:02:56.012798 kernel: NET: Registered PF_INET protocol family
Sep 6 00:02:56.012815 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 6 00:02:56.012832 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 6 00:02:56.012848 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 6 00:02:56.012865 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 6 00:02:56.012882 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Sep 6 00:02:56.012898 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 6 00:02:56.012915 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 6 00:02:56.012936 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 6 00:02:56.012952 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 6 00:02:56.012969 kernel: PCI: CLS 0 bytes, default 64
Sep 6 00:02:56.012985 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Sep 6 00:02:56.013002 kernel: kvm [1]: HYP mode not available
Sep 6 00:02:56.013018 kernel: Initialise system trusted keyrings
Sep 6 00:02:56.013035 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 6 00:02:56.013051 kernel: Key type asymmetric registered
Sep 6 00:02:56.013068 kernel: Asymmetric key parser 'x509' registered
Sep 6 00:02:56.013088 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 6 00:02:56.013105 kernel: io scheduler mq-deadline registered
Sep 6 00:02:56.013139 kernel: io scheduler kyber registered
Sep 6 00:02:56.013158 kernel: io scheduler bfq registered
Sep 6 00:02:56.015412 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 6 00:02:56.015443 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 6 00:02:56.015461 kernel: ACPI: button: Power Button [PWRB]
Sep 6 00:02:56.015478 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 6 00:02:56.015494 kernel: ACPI: button: Sleep Button [SLPB]
Sep 6 00:02:56.015518 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 6 00:02:56.015535 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 6 00:02:56.015735 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 6 00:02:56.015759 kernel: printk: console [ttyS0] disabled
Sep 6 00:02:56.015776 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 6 00:02:56.015793 kernel: printk: console [ttyS0] enabled
Sep 6 00:02:56.015809 kernel: printk: bootconsole [uart0] disabled
Sep 6 00:02:56.015825 kernel: thunder_xcv, ver 1.0
Sep 6 00:02:56.015842 kernel: thunder_bgx, ver 1.0
Sep 6 00:02:56.015862 kernel: nicpf, ver 1.0
Sep 6 00:02:56.015878 kernel: nicvf, ver 1.0
Sep 6 00:02:56.016092 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 6 00:02:56.016327 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-06T00:02:55 UTC (1757116975)
Sep 6 00:02:56.016354 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 6 00:02:56.016371 kernel: NET: Registered PF_INET6 protocol family
Sep 6 00:02:56.016388 kernel: Segment Routing with IPv6
Sep 6 00:02:56.016405 kernel: In-situ OAM (IOAM) with IPv6
Sep 6 00:02:56.016427 kernel: NET: Registered PF_PACKET protocol family
Sep 6 00:02:56.016444 kernel: Key type dns_resolver registered
Sep 6 00:02:56.016460 kernel: registered taskstats version 1
Sep 6 00:02:56.016477 kernel: Loading compiled-in X.509 certificates
Sep 6 00:02:56.016494 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.190-flatcar: 72ab5ba99c2368429c7a4d04fccfc5a39dd84386'
Sep 6 00:02:56.016510 kernel: Key type .fscrypt registered
Sep 6 00:02:56.016527 kernel: Key type fscrypt-provisioning registered
Sep 6 00:02:56.016544 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 6 00:02:56.016560 kernel: ima: Allocated hash algorithm: sha1
Sep 6 00:02:56.016581 kernel: ima: No architecture policies found
Sep 6 00:02:56.016598 kernel: clk: Disabling unused clocks
Sep 6 00:02:56.016614 kernel: Freeing unused kernel memory: 36416K
Sep 6 00:02:56.016631 kernel: Run /init as init process
Sep 6 00:02:56.016647 kernel: with arguments:
Sep 6 00:02:56.016664 kernel: /init
Sep 6 00:02:56.016701 kernel: with environment:
Sep 6 00:02:56.016723 kernel: HOME=/
Sep 6 00:02:56.016740 kernel: TERM=linux
Sep 6 00:02:56.016762 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 6 00:02:56.016784 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 6 00:02:56.016806 systemd[1]: Detected virtualization amazon.
Sep 6 00:02:56.016824 systemd[1]: Detected architecture arm64.
Sep 6 00:02:56.016841 systemd[1]: Running in initrd.
Sep 6 00:02:56.016859 systemd[1]: No hostname configured, using default hostname.
Sep 6 00:02:56.016876 systemd[1]: Hostname set to .
Sep 6 00:02:56.016899 systemd[1]: Initializing machine ID from VM UUID.
Sep 6 00:02:56.016917 systemd[1]: Queued start job for default target initrd.target.
Sep 6 00:02:56.016934 systemd[1]: Started systemd-ask-password-console.path.
Sep 6 00:02:56.016952 systemd[1]: Reached target cryptsetup.target.
Sep 6 00:02:56.016969 systemd[1]: Reached target paths.target.
Sep 6 00:02:56.016987 systemd[1]: Reached target slices.target.
Sep 6 00:02:56.017004 systemd[1]: Reached target swap.target.
Sep 6 00:02:56.017022 systemd[1]: Reached target timers.target.
Sep 6 00:02:56.017044 systemd[1]: Listening on iscsid.socket.
Sep 6 00:02:56.017062 systemd[1]: Listening on iscsiuio.socket.
Sep 6 00:02:56.017080 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 6 00:02:56.017098 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 6 00:02:56.017132 systemd[1]: Listening on systemd-journald.socket.
Sep 6 00:02:56.017153 systemd[1]: Listening on systemd-networkd.socket.
Sep 6 00:02:56.017171 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 6 00:02:56.017207 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 6 00:02:56.017227 systemd[1]: Reached target sockets.target.
Sep 6 00:02:56.017250 systemd[1]: Starting kmod-static-nodes.service...
Sep 6 00:02:56.017268 systemd[1]: Finished network-cleanup.service.
Sep 6 00:02:56.017286 systemd[1]: Starting systemd-fsck-usr.service...
Sep 6 00:02:56.017303 systemd[1]: Starting systemd-journald.service...
Sep 6 00:02:56.017321 systemd[1]: Starting systemd-modules-load.service...
Sep 6 00:02:56.017339 systemd[1]: Starting systemd-resolved.service...
Sep 6 00:02:56.017357 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 6 00:02:56.017375 systemd[1]: Finished kmod-static-nodes.service.
Sep 6 00:02:56.017397 kernel: audit: type=1130 audit(1757116975.981:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:02:56.017416 systemd[1]: Finished systemd-fsck-usr.service.
Sep 6 00:02:56.017434 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 6 00:02:56.017452 kernel: audit: type=1130 audit(1757116975.992:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:02:56.017469 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 6 00:02:56.017487 kernel: audit: type=1130 audit(1757116976.004:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:02:56.017508 systemd-journald[310]: Journal started
Sep 6 00:02:56.017602 systemd-journald[310]: Runtime Journal (/run/log/journal/ec25dc9f86d9546b6d30633db8ae3708) is 8.0M, max 75.4M, 67.4M free.
Sep 6 00:02:55.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:02:55.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:02:56.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:02:55.970254 systemd-modules-load[311]: Inserted module 'overlay'
Sep 6 00:02:56.041281 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 6 00:02:56.041326 systemd[1]: Started systemd-journald.service.
Sep 6 00:02:56.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:02:56.066220 kernel: audit: type=1130 audit(1757116976.042:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:02:56.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:02:56.073701 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 6 00:02:56.088672 kernel: audit: type=1130 audit(1757116976.074:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:02:56.088737 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 6 00:02:56.095689 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 6 00:02:56.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:02:56.111076 systemd-resolved[312]: Positive Trust Anchors:
Sep 6 00:02:56.121018 kernel: audit: type=1130 audit(1757116976.096:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:02:56.121061 kernel: Bridge firewalling registered
Sep 6 00:02:56.111372 systemd-resolved[312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 6 00:02:56.111425 systemd-resolved[312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 6 00:02:56.111823 systemd-modules-load[311]: Inserted module 'br_netfilter'
Sep 6 00:02:56.115437 systemd[1]: Starting dracut-cmdline.service...
Sep 6 00:02:56.160555 kernel: SCSI subsystem initialized
Sep 6 00:02:56.166786 dracut-cmdline[327]: dracut-dracut-053
Sep 6 00:02:56.180343 dracut-cmdline[327]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5cb382ab59aa1336098b36da02e2d4491706a6fda80ee56c4ff8582cce9206a4
Sep 6 00:02:56.199669 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 6 00:02:56.199705 kernel: device-mapper: uevent: version 1.0.3
Sep 6 00:02:56.211836 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 6 00:02:56.211883 kernel: audit: type=1130 audit(1757116976.200:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:02:56.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:02:56.198281 systemd-modules-load[311]: Inserted module 'dm_multipath'
Sep 6 00:02:56.199607 systemd[1]: Finished systemd-modules-load.service.
Sep 6 00:02:56.202793 systemd[1]: Starting systemd-sysctl.service...
Sep 6 00:02:56.234994 systemd[1]: Finished systemd-sysctl.service.
Sep 6 00:02:56.246469 kernel: audit: type=1130 audit(1757116976.233:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:02:56.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:02:56.346213 kernel: Loading iSCSI transport class v2.0-870.
Sep 6 00:02:56.365227 kernel: iscsi: registered transport (tcp)
Sep 6 00:02:56.392774 kernel: iscsi: registered transport (qla4xxx)
Sep 6 00:02:56.392844 kernel: QLogic iSCSI HBA Driver
Sep 6 00:02:56.603044 systemd-resolved[312]: Defaulting to hostname 'linux'.
Sep 6 00:02:56.607005 kernel: random: crng init done
Sep 6 00:02:56.618091 kernel: audit: type=1130 audit(1757116976.607:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:02:56.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:02:56.606672 systemd[1]: Started systemd-resolved.service.
Sep 6 00:02:56.608841 systemd[1]: Reached target nss-lookup.target.
Sep 6 00:02:56.638253 systemd[1]: Finished dracut-cmdline.service.
Sep 6 00:02:56.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:02:56.643064 systemd[1]: Starting dracut-pre-udev.service...
Sep 6 00:02:56.713233 kernel: raid6: neonx8 gen() 6187 MB/s Sep 6 00:02:56.731222 kernel: raid6: neonx8 xor() 4593 MB/s Sep 6 00:02:56.749215 kernel: raid6: neonx4 gen() 6591 MB/s Sep 6 00:02:56.767215 kernel: raid6: neonx4 xor() 4768 MB/s Sep 6 00:02:56.785215 kernel: raid6: neonx2 gen() 5812 MB/s Sep 6 00:02:56.803216 kernel: raid6: neonx2 xor() 4403 MB/s Sep 6 00:02:56.821215 kernel: raid6: neonx1 gen() 4513 MB/s Sep 6 00:02:56.839214 kernel: raid6: neonx1 xor() 3596 MB/s Sep 6 00:02:56.857215 kernel: raid6: int64x8 gen() 3452 MB/s Sep 6 00:02:56.875216 kernel: raid6: int64x8 xor() 2055 MB/s Sep 6 00:02:56.893215 kernel: raid6: int64x4 gen() 3865 MB/s Sep 6 00:02:56.911218 kernel: raid6: int64x4 xor() 2166 MB/s Sep 6 00:02:56.929222 kernel: raid6: int64x2 gen() 3625 MB/s Sep 6 00:02:56.947220 kernel: raid6: int64x2 xor() 1926 MB/s Sep 6 00:02:56.965214 kernel: raid6: int64x1 gen() 2764 MB/s Sep 6 00:02:56.984692 kernel: raid6: int64x1 xor() 1439 MB/s Sep 6 00:02:56.984722 kernel: raid6: using algorithm neonx4 gen() 6591 MB/s Sep 6 00:02:56.984746 kernel: raid6: .... xor() 4768 MB/s, rmw enabled Sep 6 00:02:56.986527 kernel: raid6: using neon recovery algorithm Sep 6 00:02:57.005229 kernel: xor: measuring software checksum speed Sep 6 00:02:57.007216 kernel: 8regs : 8529 MB/sec Sep 6 00:02:57.007257 kernel: 32regs : 10432 MB/sec Sep 6 00:02:57.010802 kernel: arm64_neon : 9559 MB/sec Sep 6 00:02:57.010832 kernel: xor: using function: 32regs (10432 MB/sec) Sep 6 00:02:57.108244 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Sep 6 00:02:57.125676 systemd[1]: Finished dracut-pre-udev.service. Sep 6 00:02:57.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:02:57.128000 audit: BPF prog-id=7 op=LOAD Sep 6 00:02:57.128000 audit: BPF prog-id=8 op=LOAD Sep 6 00:02:57.130162 systemd[1]: Starting systemd-udevd.service... Sep 6 00:02:57.166151 systemd-udevd[509]: Using default interface naming scheme 'v252'. Sep 6 00:02:57.179690 systemd[1]: Started systemd-udevd.service. Sep 6 00:02:57.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:02:57.183874 systemd[1]: Starting dracut-pre-trigger.service... Sep 6 00:02:57.215383 dracut-pre-trigger[510]: rd.md=0: removing MD RAID activation Sep 6 00:02:57.275429 systemd[1]: Finished dracut-pre-trigger.service. Sep 6 00:02:57.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:02:57.279367 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 00:02:57.380694 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 00:02:57.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:02:57.506422 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 6 00:02:57.506487 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Sep 6 00:02:57.527169 kernel: ena 0000:00:05.0: ENA device version: 0.10 Sep 6 00:02:57.527426 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Sep 6 00:02:57.527637 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Sep 6 00:02:57.527663 kernel: nvme nvme0: pci function 0000:00:04.0 Sep 6 00:02:57.527895 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:d7:52:9c:f1:6b Sep 6 00:02:57.535213 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 6 00:02:57.543221 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 6 00:02:57.543270 kernel: GPT:9289727 != 16777215 Sep 6 00:02:57.545461 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 6 00:02:57.546765 kernel: GPT:9289727 != 16777215 Sep 6 00:02:57.548706 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 6 00:02:57.548741 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 6 00:02:57.554130 (udev-worker)[558]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:02:57.619228 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (562) Sep 6 00:02:57.673490 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 6 00:02:57.694464 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 6 00:02:57.705368 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 6 00:02:57.714533 systemd[1]: Starting disk-uuid.service... Sep 6 00:02:57.731092 disk-uuid[660]: Primary Header is updated. Sep 6 00:02:57.731092 disk-uuid[660]: Secondary Entries is updated. Sep 6 00:02:57.731092 disk-uuid[660]: Secondary Header is updated. Sep 6 00:02:57.747136 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
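The GPT complaints above are the usual sign of a grown EBS volume: the backup GPT header still sits at the last LBA the disk had when the image was written, while the volume has since been enlarged. A sketch of the arithmetic behind the kernel's "GPT:9289727 != 16777215" line, with both numbers taken from the log:

```shell
# The backup GPT header was recorded at LBA 9289727 (the end of the
# disk at image-build time), but the volume now has 16777216 sectors,
# so the backup header belongs at the new final LBA.
recorded_alt_lba=9289727
disk_sectors=16777216
expected_alt_lba=$((disk_sectors - 1))
echo "backup header at LBA $recorded_alt_lba, expected at LBA $expected_alt_lba"
```

On a live system, GNU Parted (as the kernel suggests) or `sgdisk -e` against the whole disk relocates the backup structures to the end; Flatcar's first-boot partition resize typically takes care of this on its own.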
Sep 6 00:02:57.784288 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 6 00:02:58.765219 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 6 00:02:58.766166 disk-uuid[661]: The operation has completed successfully. Sep 6 00:02:58.931259 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 6 00:02:58.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:02:58.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:02:58.931466 systemd[1]: Finished disk-uuid.service. Sep 6 00:02:58.956427 systemd[1]: Starting verity-setup.service... Sep 6 00:02:58.996234 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 6 00:02:59.073344 systemd[1]: Found device dev-mapper-usr.device. Sep 6 00:02:59.078455 systemd[1]: Mounting sysusr-usr.mount... Sep 6 00:02:59.089739 systemd[1]: Finished verity-setup.service. Sep 6 00:02:59.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:02:59.175211 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 6 00:02:59.176791 systemd[1]: Mounted sysusr-usr.mount. Sep 6 00:02:59.180122 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 6 00:02:59.185513 systemd[1]: Starting ignition-setup.service... Sep 6 00:02:59.190389 systemd[1]: Starting parse-ip-for-networkd.service... 
Sep 6 00:02:59.223156 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 6 00:02:59.223248 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 6 00:02:59.223274 kernel: BTRFS info (device nvme0n1p6): has skinny extents Sep 6 00:02:59.263225 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 6 00:02:59.280015 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 6 00:02:59.305570 systemd[1]: Finished ignition-setup.service. Sep 6 00:02:59.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:02:59.309078 systemd[1]: Starting ignition-fetch-offline.service... Sep 6 00:02:59.351109 systemd[1]: Finished parse-ip-for-networkd.service. Sep 6 00:02:59.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:02:59.354000 audit: BPF prog-id=9 op=LOAD Sep 6 00:02:59.356250 systemd[1]: Starting systemd-networkd.service... Sep 6 00:02:59.409019 systemd-networkd[1106]: lo: Link UP Sep 6 00:02:59.409041 systemd-networkd[1106]: lo: Gained carrier Sep 6 00:02:59.416574 systemd-networkd[1106]: Enumeration completed Sep 6 00:02:59.416873 systemd[1]: Started systemd-networkd.service. Sep 6 00:02:59.417900 systemd-networkd[1106]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:02:59.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:02:59.422640 systemd[1]: Reached target network.target. 
Sep 6 00:02:59.428196 systemd-networkd[1106]: eth0: Link UP Sep 6 00:02:59.428207 systemd-networkd[1106]: eth0: Gained carrier Sep 6 00:02:59.434805 systemd[1]: Starting iscsiuio.service... Sep 6 00:02:59.454071 systemd[1]: Started iscsiuio.service. Sep 6 00:02:59.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:02:59.465311 systemd-networkd[1106]: eth0: DHCPv4 address 172.31.24.61/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 6 00:02:59.479410 iscsid[1111]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:02:59.479410 iscsid[1111]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Sep 6 00:02:59.479410 iscsid[1111]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 6 00:02:59.479410 iscsid[1111]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 6 00:02:59.479410 iscsid[1111]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 6 00:02:59.479410 iscsid[1111]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:02:59.479410 iscsid[1111]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 6 00:02:59.465830 systemd[1]: Starting iscsid.service... Sep 6 00:02:59.508278 systemd[1]: Started iscsid.service. Sep 6 00:02:59.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Sep 6 00:02:59.511277 systemd[1]: Starting dracut-initqueue.service... Sep 6 00:02:59.533250 systemd[1]: Finished dracut-initqueue.service. Sep 6 00:02:59.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:02:59.533574 systemd[1]: Reached target remote-fs-pre.target. Sep 6 00:02:59.534272 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 00:02:59.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:02:59.535064 systemd[1]: Reached target remote-fs.target. Sep 6 00:02:59.537233 systemd[1]: Starting dracut-pre-mount.service... Sep 6 00:02:59.559291 systemd[1]: Finished dracut-pre-mount.service. Sep 6 00:02:59.893176 ignition[1079]: Ignition 2.14.0 Sep 6 00:02:59.893272 ignition[1079]: Stage: fetch-offline Sep 6 00:02:59.893798 ignition[1079]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:02:59.893889 ignition[1079]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:02:59.918130 ignition[1079]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:02:59.919562 ignition[1079]: Ignition finished successfully Sep 6 00:02:59.926765 systemd[1]: Finished ignition-fetch-offline.service. Sep 6 00:02:59.935374 kernel: kauditd_printk_skb: 18 callbacks suppressed Sep 6 00:02:59.935413 kernel: audit: type=1130 audit(1757116979.927:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:02:59.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:02:59.930368 systemd[1]: Starting ignition-fetch.service... Sep 6 00:02:59.954261 ignition[1130]: Ignition 2.14.0 Sep 6 00:02:59.954290 ignition[1130]: Stage: fetch Sep 6 00:02:59.954617 ignition[1130]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:02:59.954676 ignition[1130]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:02:59.970647 ignition[1130]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:02:59.973281 ignition[1130]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:02:59.983716 ignition[1130]: INFO : PUT result: OK Sep 6 00:02:59.987469 ignition[1130]: DEBUG : parsed url from cmdline: "" Sep 6 00:02:59.987469 ignition[1130]: INFO : no config URL provided Sep 6 00:02:59.987469 ignition[1130]: INFO : reading system config file "/usr/lib/ignition/user.ign" Sep 6 00:02:59.993786 ignition[1130]: INFO : no config at "/usr/lib/ignition/user.ign" Sep 6 00:02:59.993786 ignition[1130]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:02:59.993786 ignition[1130]: INFO : PUT result: OK Sep 6 00:02:59.993786 ignition[1130]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Sep 6 00:03:00.006788 ignition[1130]: INFO : GET result: OK Sep 6 00:03:00.006788 ignition[1130]: DEBUG : parsing config with SHA512: 3a3d172f27a5093456d053466e27b770b63155b351b4c6af465df986232c913cffc9017c0e7593230388002a87986624cc48b9fb51d5d6d7dd792fe169ba04f6 Sep 6 00:03:00.016888 unknown[1130]: fetched base config from "system" Sep 6 00:03:00.016918 unknown[1130]: fetched base config from "system" Sep 6 00:03:00.016933 unknown[1130]: 
fetched user config from "aws" Sep 6 00:03:00.023007 ignition[1130]: fetch: fetch complete Sep 6 00:03:00.023033 ignition[1130]: fetch: fetch passed Sep 6 00:03:00.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:00.025613 systemd[1]: Finished ignition-fetch.service. Sep 6 00:03:00.023126 ignition[1130]: Ignition finished successfully Sep 6 00:03:00.044125 kernel: audit: type=1130 audit(1757116980.028:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:00.031812 systemd[1]: Starting ignition-kargs.service... Sep 6 00:03:00.059979 ignition[1136]: Ignition 2.14.0 Sep 6 00:03:00.060006 ignition[1136]: Stage: kargs Sep 6 00:03:00.060331 ignition[1136]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:03:00.060389 ignition[1136]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:03:00.076541 ignition[1136]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:03:00.079237 ignition[1136]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:03:00.082851 ignition[1136]: INFO : PUT result: OK Sep 6 00:03:00.088749 ignition[1136]: kargs: kargs passed Sep 6 00:03:00.088854 ignition[1136]: Ignition finished successfully Sep 6 00:03:00.105320 kernel: audit: type=1130 audit(1757116980.092:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:03:00.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:00.091805 systemd[1]: Finished ignition-kargs.service. Sep 6 00:03:00.095140 systemd[1]: Starting ignition-disks.service... Sep 6 00:03:00.116758 ignition[1142]: Ignition 2.14.0 Sep 6 00:03:00.116774 ignition[1142]: Stage: disks Sep 6 00:03:00.117066 ignition[1142]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:03:00.117145 ignition[1142]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:03:00.133431 ignition[1142]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:03:00.135902 ignition[1142]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:03:00.139200 ignition[1142]: INFO : PUT result: OK Sep 6 00:03:00.145296 ignition[1142]: disks: disks passed Sep 6 00:03:00.146985 ignition[1142]: Ignition finished successfully Sep 6 00:03:00.150140 systemd[1]: Finished ignition-disks.service. Sep 6 00:03:00.161294 kernel: audit: type=1130 audit(1757116980.150:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:00.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:00.152231 systemd[1]: Reached target initrd-root-device.target. Sep 6 00:03:00.162775 systemd[1]: Reached target local-fs-pre.target. Sep 6 00:03:00.166320 systemd[1]: Reached target local-fs.target. Sep 6 00:03:00.169528 systemd[1]: Reached target sysinit.target. 
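Each Ignition stage above (fetch-offline, fetch, kargs, disks) opens with the same exchange against the EC2 instance metadata service: a PUT to mint an IMDSv2 session token, then GETs that present it. A dry-run sketch of that exchange, which prints the requests rather than issuing them, since the 169.254.169.254 endpoint only exists from inside an EC2 instance; the header names are the standard IMDSv2 ones, and the TTL value is an illustrative choice:

```shell
# Dry-run sketch of the IMDSv2 exchange Ignition logs as
# "PUT http://169.254.169.254/latest/api/token: attempt #1" followed by
# "GET http://169.254.169.254/2019-10-01/user-data: attempt #1".
imds_dry_run() {
  # Step 1: mint a short-lived session token (TTL here is illustrative).
  echo 'curl -X PUT -H "X-aws-ec2-metadata-token-ttl-seconds: 300" http://169.254.169.254/latest/api/token'
  # Step 2: fetch user-data, presenting the token from step 1.
  echo 'curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/2019-10-01/user-data'
}
imds_dry_run
```

The "PUT result: OK" lines in the log are this token handshake succeeding before each stage reads its config.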
Sep 6 00:03:00.172562 systemd[1]: Reached target basic.target. Sep 6 00:03:00.178497 systemd[1]: Starting systemd-fsck-root.service... Sep 6 00:03:00.223281 systemd-fsck[1150]: ROOT: clean, 629/553520 files, 56027/553472 blocks Sep 6 00:03:00.230084 systemd[1]: Finished systemd-fsck-root.service. Sep 6 00:03:00.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:00.241484 systemd[1]: Mounting sysroot.mount... Sep 6 00:03:00.245206 kernel: audit: type=1130 audit(1757116980.230:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:00.262228 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 6 00:03:00.264079 systemd[1]: Mounted sysroot.mount. Sep 6 00:03:00.266865 systemd[1]: Reached target initrd-root-fs.target. Sep 6 00:03:00.276862 systemd[1]: Mounting sysroot-usr.mount... Sep 6 00:03:00.279322 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 6 00:03:00.279413 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 6 00:03:00.279478 systemd[1]: Reached target ignition-diskful.target. Sep 6 00:03:00.289502 systemd[1]: Mounted sysroot-usr.mount. Sep 6 00:03:00.315642 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 6 00:03:00.320999 systemd[1]: Starting initrd-setup-root.service... 
Sep 6 00:03:00.341842 initrd-setup-root[1172]: cut: /sysroot/etc/passwd: No such file or directory Sep 6 00:03:00.355175 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1167) Sep 6 00:03:00.356454 initrd-setup-root[1180]: cut: /sysroot/etc/group: No such file or directory Sep 6 00:03:00.363996 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 6 00:03:00.364039 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 6 00:03:00.364063 kernel: BTRFS info (device nvme0n1p6): has skinny extents Sep 6 00:03:00.371820 initrd-setup-root[1204]: cut: /sysroot/etc/shadow: No such file or directory Sep 6 00:03:00.381237 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 6 00:03:00.383150 initrd-setup-root[1214]: cut: /sysroot/etc/gshadow: No such file or directory Sep 6 00:03:00.400871 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 6 00:03:00.453442 systemd-networkd[1106]: eth0: Gained IPv6LL Sep 6 00:03:00.582398 systemd[1]: Finished initrd-setup-root.service. Sep 6 00:03:00.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:00.592229 systemd[1]: Starting ignition-mount.service... Sep 6 00:03:00.602107 kernel: audit: type=1130 audit(1757116980.589:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:00.602774 systemd[1]: Starting sysroot-boot.service... Sep 6 00:03:00.615505 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Sep 6 00:03:00.615676 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Sep 6 00:03:00.638009 ignition[1232]: INFO : Ignition 2.14.0 Sep 6 00:03:00.638009 ignition[1232]: INFO : Stage: mount Sep 6 00:03:00.641675 ignition[1232]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:03:00.641675 ignition[1232]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:03:00.661482 systemd[1]: Finished sysroot-boot.service. Sep 6 00:03:00.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:00.666789 ignition[1232]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:03:00.666789 ignition[1232]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:03:00.677307 kernel: audit: type=1130 audit(1757116980.663:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:00.677449 ignition[1232]: INFO : PUT result: OK Sep 6 00:03:00.683023 ignition[1232]: INFO : mount: mount passed Sep 6 00:03:00.684997 ignition[1232]: INFO : Ignition finished successfully Sep 6 00:03:00.688098 systemd[1]: Finished ignition-mount.service. Sep 6 00:03:00.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:00.692700 systemd[1]: Starting ignition-files.service... Sep 6 00:03:00.702025 kernel: audit: type=1130 audit(1757116980.690:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:03:00.709479 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 6 00:03:00.733578 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by mount (1242) Sep 6 00:03:00.739227 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 6 00:03:00.739262 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 6 00:03:00.739286 kernel: BTRFS info (device nvme0n1p6): has skinny extents Sep 6 00:03:00.755218 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 6 00:03:00.760159 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 6 00:03:00.778473 ignition[1261]: INFO : Ignition 2.14.0 Sep 6 00:03:00.778473 ignition[1261]: INFO : Stage: files Sep 6 00:03:00.782836 ignition[1261]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:03:00.782836 ignition[1261]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:03:00.796892 ignition[1261]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:03:00.799834 ignition[1261]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:03:00.802863 ignition[1261]: INFO : PUT result: OK Sep 6 00:03:00.808218 ignition[1261]: DEBUG : files: compiled without relabeling support, skipping Sep 6 00:03:00.814212 ignition[1261]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 6 00:03:00.817401 ignition[1261]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 6 00:03:00.837237 ignition[1261]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 6 00:03:00.840470 ignition[1261]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 6 00:03:00.843457 unknown[1261]: wrote ssh authorized keys file for user: core Sep 6 00:03:00.845769 
ignition[1261]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 6 00:03:00.854825 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 6 00:03:00.859301 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 6 00:03:00.859301 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 6 00:03:00.859301 ignition[1261]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 6 00:03:03.085625 ignition[1261]: INFO : GET result: OK Sep 6 00:03:04.396451 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 6 00:03:04.400813 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 6 00:03:04.400813 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 6 00:03:04.400813 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 6 00:03:04.400813 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 6 00:03:04.400813 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Sep 6 00:03:04.400813 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Sep 6 00:03:04.435954 ignition[1261]: INFO : op(1): [started] mounting 
"/dev/disk/by-label/OEM" at "/mnt/oem3673934164"
Sep 6 00:03:04.435954 ignition[1261]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3673934164": device or resource busy
Sep 6 00:03:04.435954 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3673934164", trying btrfs: device or resource busy
Sep 6 00:03:04.435954 ignition[1261]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3673934164"
Sep 6 00:03:04.435954 ignition[1261]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3673934164"
Sep 6 00:03:04.453574 ignition[1261]: INFO : op(3): [started] unmounting "/mnt/oem3673934164"
Sep 6 00:03:04.453574 ignition[1261]: INFO : op(3): [finished] unmounting "/mnt/oem3673934164"
Sep 6 00:03:04.453574 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Sep 6 00:03:04.465287 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 6 00:03:04.465287 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 6 00:03:04.465287 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 00:03:04.465287 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 00:03:04.465287 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/home/core/install.sh"
Sep 6 00:03:04.465287 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/home/core/install.sh"
Sep 6 00:03:04.465287 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 00:03:04.465287 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 00:03:04.465287 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Sep 6 00:03:04.465287 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Sep 6 00:03:04.476406 systemd[1]: mnt-oem3673934164.mount: Deactivated successfully.
Sep 6 00:03:04.518338 ignition[1261]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem861661851"
Sep 6 00:03:04.518338 ignition[1261]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem861661851": device or resource busy
Sep 6 00:03:04.518338 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem861661851", trying btrfs: device or resource busy
Sep 6 00:03:04.518338 ignition[1261]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem861661851"
Sep 6 00:03:04.518338 ignition[1261]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem861661851"
Sep 6 00:03:04.534465 ignition[1261]: INFO : op(6): [started] unmounting "/mnt/oem861661851"
Sep 6 00:03:04.534465 ignition[1261]: INFO : op(6): [finished] unmounting "/mnt/oem861661851"
Sep 6 00:03:04.534465 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Sep 6 00:03:04.534465 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 6 00:03:04.550046 ignition[1261]: INFO : GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 6 00:03:05.053342 ignition[1261]: INFO : GET result: OK
Sep 6 00:03:05.563226 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 6 00:03:05.568595 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Sep 6 00:03:05.568595 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Sep 6 00:03:05.583927 ignition[1261]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3845151902"
Sep 6 00:03:05.587133 ignition[1261]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3845151902": device or resource busy
Sep 6 00:03:05.587133 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3845151902", trying btrfs: device or resource busy
Sep 6 00:03:05.587133 ignition[1261]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3845151902"
Sep 6 00:03:05.598589 ignition[1261]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3845151902"
Sep 6 00:03:05.598589 ignition[1261]: INFO : op(9): [started] unmounting "/mnt/oem3845151902"
Sep 6 00:03:05.598589 ignition[1261]: INFO : op(9): [finished] unmounting "/mnt/oem3845151902"
Sep 6 00:03:05.598589 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Sep 6 00:03:05.598589 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Sep 6 00:03:05.598589 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Sep 6 00:03:05.624476 systemd[1]: mnt-oem3845151902.mount: Deactivated successfully.
Sep 6 00:03:05.650152 ignition[1261]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2830329329"
Sep 6 00:03:05.650152 ignition[1261]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2830329329": device or resource busy
Sep 6 00:03:05.650152 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2830329329", trying btrfs: device or resource busy
Sep 6 00:03:05.650152 ignition[1261]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2830329329"
Sep 6 00:03:05.666320 ignition[1261]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2830329329"
Sep 6 00:03:05.670309 ignition[1261]: INFO : op(c): [started] unmounting "/mnt/oem2830329329"
Sep 6 00:03:05.673158 ignition[1261]: INFO : op(c): [finished] unmounting "/mnt/oem2830329329"
Sep 6 00:03:05.677882 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Sep 6 00:03:05.677882 ignition[1261]: INFO : files: op(10): [started] processing unit "nvidia.service"
Sep 6 00:03:05.677882 ignition[1261]: INFO : files: op(10): [finished] processing unit "nvidia.service"
Sep 6 00:03:05.677882 ignition[1261]: INFO : files: op(11): [started] processing unit "coreos-metadata-sshkeys@.service"
Sep 6 00:03:05.677882 ignition[1261]: INFO : files: op(11): [finished] processing unit "coreos-metadata-sshkeys@.service"
Sep 6 00:03:05.677882 ignition[1261]: INFO : files: op(12): [started] processing unit "amazon-ssm-agent.service"
Sep 6 00:03:05.677882 ignition[1261]: INFO : files: op(12): op(13): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Sep 6 00:03:05.677882 ignition[1261]: INFO : files: op(12): op(13): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Sep 6 00:03:05.677882 ignition[1261]: INFO : files: op(12): [finished] processing unit "amazon-ssm-agent.service"
Sep 6 00:03:05.677882 ignition[1261]: INFO : files: op(14): [started] processing unit "containerd.service"
Sep 6 00:03:05.677882 ignition[1261]: INFO : files: op(14): op(15): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 6 00:03:05.677882 ignition[1261]: INFO : files: op(14): op(15): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 6 00:03:05.677882 ignition[1261]: INFO : files: op(14): [finished] processing unit "containerd.service"
Sep 6 00:03:05.677882 ignition[1261]: INFO : files: op(16): [started] processing unit "prepare-helm.service"
Sep 6 00:03:05.677882 ignition[1261]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 6 00:03:05.677882 ignition[1261]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 6 00:03:05.677882 ignition[1261]: INFO : files: op(16): [finished] processing unit "prepare-helm.service"
Sep 6 00:03:05.677882 ignition[1261]: INFO : files: op(18): [started] setting preset to enabled for "nvidia.service"
Sep 6 00:03:05.677882 ignition[1261]: INFO : files: op(18): [finished] setting preset to enabled for "nvidia.service"
Sep 6 00:03:05.772825 ignition[1261]: INFO : files: op(19): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Sep 6 00:03:05.772825 ignition[1261]: INFO : files: op(19): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Sep 6 00:03:05.772825 ignition[1261]: INFO : files: op(1a): [started] setting preset to enabled for "amazon-ssm-agent.service"
Sep 6 00:03:05.772825 ignition[1261]: INFO : files: op(1a): [finished] setting preset to enabled for "amazon-ssm-agent.service"
Sep 6 00:03:05.772825 ignition[1261]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-helm.service"
Sep 6 00:03:05.772825 ignition[1261]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-helm.service"
Sep 6 00:03:05.772825 ignition[1261]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 6 00:03:05.772825 ignition[1261]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 6 00:03:05.772825 ignition[1261]: INFO : files: files passed
Sep 6 00:03:05.772825 ignition[1261]: INFO : Ignition finished successfully
Sep 6 00:03:05.812692 kernel: audit: type=1130 audit(1757116985.791:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:05.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:05.714177 systemd[1]: mnt-oem2830329329.mount: Deactivated successfully.
Sep 6 00:03:05.790874 systemd[1]: Finished ignition-files.service.
Sep 6 00:03:05.823286 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Sep 6 00:03:05.832908 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Sep 6 00:03:05.837594 systemd[1]: Starting ignition-quench.service...
Sep 6 00:03:05.853857 initrd-setup-root-after-ignition[1286]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 6 00:03:05.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:05.857912 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Sep 6 00:03:05.896556 kernel: audit: type=1130 audit(1757116985.867:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:05.896611 kernel: audit: type=1130 audit(1757116985.876:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:05.896638 kernel: audit: type=1131 audit(1757116985.877:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:05.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:05.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:05.873981 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 6 00:03:05.874201 systemd[1]: Finished ignition-quench.service.
Sep 6 00:03:05.878420 systemd[1]: Reached target ignition-complete.target.
Sep 6 00:03:05.898074 systemd[1]: Starting initrd-parse-etc.service...
Sep 6 00:03:05.930896 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 6 00:03:05.933273 systemd[1]: Finished initrd-parse-etc.service.
Sep 6 00:03:05.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:05.937134 systemd[1]: Reached target initrd-fs.target.
Sep 6 00:03:05.951596 kernel: audit: type=1130 audit(1757116985.935:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:05.951632 kernel: audit: type=1131 audit(1757116985.935:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:05.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:05.957249 systemd[1]: Reached target initrd.target.
Sep 6 00:03:05.961116 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Sep 6 00:03:05.962592 systemd[1]: Starting dracut-pre-pivot.service...
Sep 6 00:03:05.993847 systemd[1]: Finished dracut-pre-pivot.service.
Sep 6 00:03:05.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.007372 systemd[1]: Starting initrd-cleanup.service...
Sep 6 00:03:06.021229 kernel: audit: type=1130 audit(1757116985.996:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.027822 systemd[1]: Stopped target nss-lookup.target.
Sep 6 00:03:06.111876 kernel: audit: type=1131 audit(1757116986.029:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.111927 kernel: audit: type=1131 audit(1757116986.046:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.111954 kernel: audit: type=1131 audit(1757116986.055:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.028145 systemd[1]: Stopped target remote-cryptsetup.target.
Sep 6 00:03:06.124546 iscsid[1111]: iscsid shutting down.
Sep 6 00:03:06.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.029021 systemd[1]: Stopped target timers.target.
Sep 6 00:03:06.029804 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 6 00:03:06.030009 systemd[1]: Stopped dracut-pre-pivot.service.
Sep 6 00:03:06.038816 systemd[1]: Stopped target initrd.target.
Sep 6 00:03:06.039599 systemd[1]: Stopped target basic.target.
Sep 6 00:03:06.040404 systemd[1]: Stopped target ignition-complete.target.
Sep 6 00:03:06.041237 systemd[1]: Stopped target ignition-diskful.target.
Sep 6 00:03:06.042022 systemd[1]: Stopped target initrd-root-device.target.
Sep 6 00:03:06.042859 systemd[1]: Stopped target remote-fs.target.
Sep 6 00:03:06.043656 systemd[1]: Stopped target remote-fs-pre.target.
Sep 6 00:03:06.044465 systemd[1]: Stopped target sysinit.target.
Sep 6 00:03:06.045297 systemd[1]: Stopped target local-fs.target.
Sep 6 00:03:06.153072 ignition[1300]: INFO : Ignition 2.14.0
Sep 6 00:03:06.153072 ignition[1300]: INFO : Stage: umount
Sep 6 00:03:06.153072 ignition[1300]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:03:06.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.046068 systemd[1]: Stopped target local-fs-pre.target.
Sep 6 00:03:06.165698 ignition[1300]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Sep 6 00:03:06.046930 systemd[1]: Stopped target swap.target.
Sep 6 00:03:06.047249 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 6 00:03:06.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.047451 systemd[1]: Stopped dracut-pre-mount.service.
Sep 6 00:03:06.055346 systemd[1]: Stopped target cryptsetup.target.
Sep 6 00:03:06.056090 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 6 00:03:06.201816 ignition[1300]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 6 00:03:06.201816 ignition[1300]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 6 00:03:06.201816 ignition[1300]: INFO : PUT result: OK
Sep 6 00:03:06.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.056306 systemd[1]: Stopped dracut-initqueue.service.
Sep 6 00:03:06.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.057076 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 6 00:03:06.057306 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Sep 6 00:03:06.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.233158 ignition[1300]: INFO : umount: umount passed
Sep 6 00:03:06.233158 ignition[1300]: INFO : Ignition finished successfully
Sep 6 00:03:06.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.057815 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 6 00:03:06.057993 systemd[1]: Stopped ignition-files.service.
Sep 6 00:03:06.105310 systemd[1]: Stopping ignition-mount.service...
Sep 6 00:03:06.112516 systemd[1]: Stopping iscsid.service...
Sep 6 00:03:06.121689 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 6 00:03:06.121981 systemd[1]: Stopped kmod-static-nodes.service.
Sep 6 00:03:06.137600 systemd[1]: Stopping sysroot-boot.service...
Sep 6 00:03:06.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.154977 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 6 00:03:06.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.155328 systemd[1]: Stopped systemd-udev-trigger.service.
Sep 6 00:03:06.159045 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 6 00:03:06.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.163135 systemd[1]: Stopped dracut-pre-trigger.service.
Sep 6 00:03:06.191081 systemd[1]: iscsid.service: Deactivated successfully.
Sep 6 00:03:06.191405 systemd[1]: Stopped iscsid.service.
Sep 6 00:03:06.207288 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 6 00:03:06.208047 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 6 00:03:06.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.211047 systemd[1]: Finished initrd-cleanup.service.
Sep 6 00:03:06.219768 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 6 00:03:06.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.219968 systemd[1]: Stopped sysroot-boot.service.
Sep 6 00:03:06.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.225435 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 6 00:03:06.225613 systemd[1]: Stopped ignition-mount.service.
Sep 6 00:03:06.228535 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 6 00:03:06.228634 systemd[1]: Stopped ignition-disks.service.
Sep 6 00:03:06.231161 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 6 00:03:06.231277 systemd[1]: Stopped ignition-kargs.service.
Sep 6 00:03:06.233093 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 6 00:03:06.233173 systemd[1]: Stopped ignition-fetch.service.
Sep 6 00:03:06.239942 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 6 00:03:06.240068 systemd[1]: Stopped ignition-fetch-offline.service.
Sep 6 00:03:06.242059 systemd[1]: Stopped target paths.target.
Sep 6 00:03:06.243629 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 6 00:03:06.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.247253 systemd[1]: Stopped systemd-ask-password-console.path.
Sep 6 00:03:06.247371 systemd[1]: Stopped target slices.target.
Sep 6 00:03:06.250761 systemd[1]: Stopped target sockets.target.
Sep 6 00:03:06.256201 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 6 00:03:06.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.357000 audit: BPF prog-id=6 op=UNLOAD
Sep 6 00:03:06.256284 systemd[1]: Closed iscsid.socket.
Sep 6 00:03:06.258877 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 6 00:03:06.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.258978 systemd[1]: Stopped ignition-setup.service.
Sep 6 00:03:06.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.263164 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 6 00:03:06.263275 systemd[1]: Stopped initrd-setup-root.service.
Sep 6 00:03:06.267329 systemd[1]: Stopping iscsiuio.service...
Sep 6 00:03:06.272143 systemd[1]: iscsiuio.service: Deactivated successfully.
Sep 6 00:03:06.272364 systemd[1]: Stopped iscsiuio.service.
Sep 6 00:03:06.273499 systemd[1]: Stopped target network.target.
Sep 6 00:03:06.277835 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 6 00:03:06.277904 systemd[1]: Closed iscsiuio.socket.
Sep 6 00:03:06.283596 systemd[1]: Stopping systemd-networkd.service...
Sep 6 00:03:06.286713 systemd[1]: Stopping systemd-resolved.service...
Sep 6 00:03:06.288550 systemd-networkd[1106]: eth0: DHCPv6 lease lost
Sep 6 00:03:06.404000 audit: BPF prog-id=9 op=UNLOAD
Sep 6 00:03:06.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.290880 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 6 00:03:06.291077 systemd[1]: Stopped systemd-networkd.service.
Sep 6 00:03:06.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:06.295102 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 6 00:03:06.295172 systemd[1]: Closed systemd-networkd.socket.
Sep 6 00:03:06.301084 systemd[1]: Stopping network-cleanup.service...
Sep 6 00:03:06.305899 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 6 00:03:06.306026 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 6 00:03:06.308286 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:03:06.308368 systemd[1]: Stopped systemd-sysctl.service. Sep 6 00:03:06.311951 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 6 00:03:06.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:06.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:06.312065 systemd[1]: Stopped systemd-modules-load.service. Sep 6 00:03:06.315908 systemd[1]: Stopping systemd-udevd.service... Sep 6 00:03:06.339729 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 6 00:03:06.341629 systemd[1]: Stopped systemd-resolved.service. Sep 6 00:03:06.351338 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 6 00:03:06.354518 systemd[1]: Stopped systemd-udevd.service. Sep 6 00:03:06.357927 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 6 00:03:06.358019 systemd[1]: Closed systemd-udevd-control.socket. Sep 6 00:03:06.363427 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 6 00:03:06.363514 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 6 00:03:06.365405 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 6 00:03:06.366774 systemd[1]: Stopped dracut-pre-udev.service. Sep 6 00:03:06.370522 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 6 00:03:06.370612 systemd[1]: Stopped dracut-cmdline.service. Sep 6 00:03:06.372432 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 6 00:03:06.372516 systemd[1]: Stopped dracut-cmdline-ask.service. 
Sep 6 00:03:06.388535 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 6 00:03:06.407027 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 6 00:03:06.407138 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 6 00:03:06.412416 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 6 00:03:06.416304 systemd[1]: Stopped network-cleanup.service. Sep 6 00:03:06.429092 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 6 00:03:06.431134 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 6 00:03:06.438592 systemd[1]: Reached target initrd-switch-root.target. Sep 6 00:03:06.459316 systemd[1]: Starting initrd-switch-root.service... Sep 6 00:03:06.500243 systemd[1]: Switching root. Sep 6 00:03:06.505000 audit: BPF prog-id=8 op=UNLOAD Sep 6 00:03:06.505000 audit: BPF prog-id=7 op=UNLOAD Sep 6 00:03:06.507000 audit: BPF prog-id=5 op=UNLOAD Sep 6 00:03:06.507000 audit: BPF prog-id=4 op=UNLOAD Sep 6 00:03:06.507000 audit: BPF prog-id=3 op=UNLOAD Sep 6 00:03:06.530393 systemd-journald[310]: Journal stopped Sep 6 00:03:11.554392 systemd-journald[310]: Received SIGTERM from PID 1 (systemd). Sep 6 00:03:11.554506 kernel: SELinux: Class mctp_socket not defined in policy. Sep 6 00:03:11.554556 kernel: SELinux: Class anon_inode not defined in policy. 
Sep 6 00:03:11.554588 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 6 00:03:11.554618 kernel: SELinux: policy capability network_peer_controls=1 Sep 6 00:03:11.554658 kernel: SELinux: policy capability open_perms=1 Sep 6 00:03:11.554690 kernel: SELinux: policy capability extended_socket_class=1 Sep 6 00:03:11.554728 kernel: SELinux: policy capability always_check_network=0 Sep 6 00:03:11.554758 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 6 00:03:11.554789 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 6 00:03:11.554818 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 6 00:03:11.554849 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 6 00:03:11.554885 systemd[1]: Successfully loaded SELinux policy in 80.610ms. Sep 6 00:03:11.554938 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.380ms. Sep 6 00:03:11.554974 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 00:03:11.555007 systemd[1]: Detected virtualization amazon. Sep 6 00:03:11.555040 systemd[1]: Detected architecture arm64. Sep 6 00:03:11.555072 systemd[1]: Detected first boot. Sep 6 00:03:11.555109 systemd[1]: Initializing machine ID from VM UUID. Sep 6 00:03:11.555142 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 6 00:03:11.555213 systemd[1]: Populated /etc with preset unit settings. Sep 6 00:03:11.555259 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Sep 6 00:03:11.555295 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:03:11.555333 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:03:11.555366 systemd[1]: Queued start job for default target multi-user.target. Sep 6 00:03:11.555399 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device. Sep 6 00:03:11.555436 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 6 00:03:11.555468 systemd[1]: Created slice system-addon\x2drun.slice. Sep 6 00:03:11.555502 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Sep 6 00:03:11.555534 systemd[1]: Created slice system-getty.slice. Sep 6 00:03:11.555567 systemd[1]: Created slice system-modprobe.slice. Sep 6 00:03:11.555598 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 6 00:03:11.555628 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 6 00:03:11.555660 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 6 00:03:11.555692 systemd[1]: Created slice user.slice. Sep 6 00:03:11.555727 systemd[1]: Started systemd-ask-password-console.path. Sep 6 00:03:11.555757 systemd[1]: Started systemd-ask-password-wall.path. Sep 6 00:03:11.555791 systemd[1]: Set up automount boot.automount. Sep 6 00:03:11.555822 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 6 00:03:11.555852 systemd[1]: Reached target integritysetup.target. Sep 6 00:03:11.555882 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 00:03:11.555914 systemd[1]: Reached target remote-fs.target. Sep 6 00:03:11.555947 systemd[1]: Reached target slices.target. Sep 6 00:03:11.555981 systemd[1]: Reached target swap.target. Sep 6 00:03:11.556014 systemd[1]: Reached target torcx.target. 
Sep 6 00:03:11.556045 systemd[1]: Reached target veritysetup.target.
Sep 6 00:03:11.556076 systemd[1]: Listening on systemd-coredump.socket.
Sep 6 00:03:11.556107 systemd[1]: Listening on systemd-initctl.socket.
Sep 6 00:03:11.556138 kernel: kauditd_printk_skb: 48 callbacks suppressed
Sep 6 00:03:11.556169 kernel: audit: type=1400 audit(1757116991.246:88): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 6 00:03:11.556221 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 6 00:03:11.556256 kernel: audit: type=1335 audit(1757116991.246:89): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Sep 6 00:03:11.556291 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 6 00:03:11.556323 systemd[1]: Listening on systemd-journald.socket.
Sep 6 00:03:11.556353 systemd[1]: Listening on systemd-networkd.socket.
Sep 6 00:03:11.556382 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 6 00:03:11.556412 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 6 00:03:11.556442 systemd[1]: Listening on systemd-userdbd.socket.
Sep 6 00:03:11.556474 systemd[1]: Mounting dev-hugepages.mount...
Sep 6 00:03:11.556507 systemd[1]: Mounting dev-mqueue.mount...
Sep 6 00:03:11.556540 systemd[1]: Mounting media.mount...
Sep 6 00:03:11.556570 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 6 00:03:11.556603 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 6 00:03:11.556635 systemd[1]: Mounting tmp.mount...
Sep 6 00:03:11.556667 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 6 00:03:11.556700 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:03:11.556730 systemd[1]: Starting kmod-static-nodes.service...
Sep 6 00:03:11.556761 systemd[1]: Starting modprobe@configfs.service...
Sep 6 00:03:11.556791 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:03:11.556821 systemd[1]: Starting modprobe@drm.service...
Sep 6 00:03:11.556852 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:03:11.556888 systemd[1]: Starting modprobe@fuse.service...
Sep 6 00:03:11.556919 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:03:11.556952 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 6 00:03:11.556985 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Sep 6 00:03:11.557036 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Sep 6 00:03:11.557066 systemd[1]: Starting systemd-journald.service...
Sep 6 00:03:11.557098 systemd[1]: Starting systemd-modules-load.service...
Sep 6 00:03:11.557128 kernel: fuse: init (API version 7.34)
Sep 6 00:03:11.557161 systemd[1]: Starting systemd-network-generator.service...
Sep 6 00:03:11.557210 systemd[1]: Starting systemd-remount-fs.service...
Sep 6 00:03:11.557243 kernel: loop: module loaded
Sep 6 00:03:11.557273 systemd[1]: Starting systemd-udev-trigger.service...
Sep 6 00:03:11.557305 systemd[1]: Mounted dev-hugepages.mount.
Sep 6 00:03:11.557335 kernel: audit: type=1305 audit(1757116991.550:90): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 6 00:03:11.557366 kernel: audit: type=1300 audit(1757116991.550:90): arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffeda3bff0 a2=4000 a3=1 items=0 ppid=1 pid=1449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:03:11.557401 systemd-journald[1449]: Journal started
Sep 6 00:03:11.557492 systemd-journald[1449]: Runtime Journal (/run/log/journal/ec25dc9f86d9546b6d30633db8ae3708) is 8.0M, max 75.4M, 67.4M free.
Sep 6 00:03:11.246000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 6 00:03:11.246000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Sep 6 00:03:11.550000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 6 00:03:11.550000 audit[1449]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffeda3bff0 a2=4000 a3=1 items=0 ppid=1 pid=1449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:03:11.570304 systemd[1]: Started systemd-journald.service.
Sep 6 00:03:11.550000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Sep 6 00:03:11.579245 kernel: audit: type=1327 audit(1757116991.550:90): proctitle="/usr/lib/systemd/systemd-journald"
Sep 6 00:03:11.579651 systemd[1]: Mounted dev-mqueue.mount.
Sep 6 00:03:11.589662 kernel: audit: type=1130 audit(1757116991.576:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:11.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:11.590567 systemd[1]: Mounted media.mount.
Sep 6 00:03:11.593604 systemd[1]: Mounted sys-kernel-debug.mount.
Sep 6 00:03:11.595631 systemd[1]: Mounted sys-kernel-tracing.mount.
Sep 6 00:03:11.597746 systemd[1]: Mounted tmp.mount.
Sep 6 00:03:11.601963 systemd[1]: Finished kmod-static-nodes.service.
Sep 6 00:03:11.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:11.604608 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 6 00:03:11.604947 systemd[1]: Finished modprobe@configfs.service.
Sep 6 00:03:11.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:11.616270 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:03:11.616620 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:03:11.623617 kernel: audit: type=1130 audit(1757116991.602:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:11.624124 kernel: audit: type=1130 audit(1757116991.614:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:11.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:11.635434 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 6 00:03:11.635793 systemd[1]: Finished modprobe@drm.service.
Sep 6 00:03:11.638646 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:03:11.639003 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:03:11.641776 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 6 00:03:11.642135 systemd[1]: Finished modprobe@fuse.service.
Sep 6 00:03:11.650282 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:03:11.650711 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:03:11.653395 systemd[1]: Finished systemd-modules-load.service.
Sep 6 00:03:11.656144 systemd[1]: Finished systemd-network-generator.service.
Sep 6 00:03:11.658926 systemd[1]: Finished systemd-remount-fs.service.
Sep 6 00:03:11.662531 systemd[1]: Reached target network-pre.target.
Sep 6 00:03:11.667946 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Sep 6 00:03:11.687735 kernel: audit: type=1131 audit(1757116991.614:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:11.687861 kernel: audit: type=1130 audit(1757116991.633:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:11.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:11.674081 systemd[1]: Mounting sys-kernel-config.mount...
Sep 6 00:03:11.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:11.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:11.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:11.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:11.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:11.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:11.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:11.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:11.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:11.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:11.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:11.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:11.687910 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 6 00:03:11.692590 systemd[1]: Starting systemd-hwdb-update.service...
Sep 6 00:03:11.698015 systemd[1]: Starting systemd-journal-flush.service...
Sep 6 00:03:11.701825 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:03:11.705927 systemd[1]: Starting systemd-random-seed.service...
Sep 6 00:03:11.707905 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 6 00:03:11.714679 systemd[1]: Starting systemd-sysctl.service...
Sep 6 00:03:11.721348 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Sep 6 00:03:11.724893 systemd[1]: Mounted sys-kernel-config.mount.
Sep 6 00:03:11.753556 systemd-journald[1449]: Time spent on flushing to /var/log/journal/ec25dc9f86d9546b6d30633db8ae3708 is 84.213ms for 1081 entries.
Sep 6 00:03:11.753556 systemd-journald[1449]: System Journal (/var/log/journal/ec25dc9f86d9546b6d30633db8ae3708) is 8.0M, max 195.6M, 187.6M free.
Sep 6 00:03:11.862141 systemd-journald[1449]: Received client request to flush runtime journal.
Sep 6 00:03:11.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:11.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:11.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:11.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:11.753905 systemd[1]: Finished flatcar-tmpfiles.service.
Sep 6 00:03:11.761163 systemd[1]: Starting systemd-sysusers.service...
Sep 6 00:03:11.779267 systemd[1]: Finished systemd-random-seed.service.
Sep 6 00:03:11.781446 systemd[1]: Reached target first-boot-complete.target.
Sep 6 00:03:11.799793 systemd[1]: Finished systemd-sysctl.service.
Sep 6 00:03:11.833965 systemd[1]: Finished systemd-sysusers.service.
Sep 6 00:03:11.839642 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 6 00:03:11.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:11.866006 systemd[1]: Finished systemd-journal-flush.service.
Sep 6 00:03:11.907776 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 6 00:03:11.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:11.952801 systemd[1]: Finished systemd-udev-trigger.service.
Sep 6 00:03:11.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:11.957432 systemd[1]: Starting systemd-udev-settle.service...
Sep 6 00:03:11.979223 udevadm[1507]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 6 00:03:12.617489 systemd[1]: Finished systemd-hwdb-update.service.
Sep 6 00:03:12.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:12.621867 systemd[1]: Starting systemd-udevd.service...
Sep 6 00:03:12.666742 systemd-udevd[1509]: Using default interface naming scheme 'v252'.
Sep 6 00:03:12.714350 systemd[1]: Started systemd-udevd.service.
Sep 6 00:03:12.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:12.719775 systemd[1]: Starting systemd-networkd.service...
Sep 6 00:03:12.731871 systemd[1]: Starting systemd-userdbd.service...
Sep 6 00:03:12.836753 systemd[1]: Found device dev-ttyS0.device.
Sep 6 00:03:12.846689 systemd[1]: Started systemd-userdbd.service.
Sep 6 00:03:12.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:12.893419 (udev-worker)[1519]: Network interface NamePolicy= disabled on kernel command line.
Sep 6 00:03:13.034021 systemd-networkd[1511]: lo: Link UP
Sep 6 00:03:13.034052 systemd-networkd[1511]: lo: Gained carrier
Sep 6 00:03:13.035152 systemd-networkd[1511]: Enumeration completed
Sep 6 00:03:13.035495 systemd[1]: Started systemd-networkd.service.
Sep 6 00:03:13.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:13.040161 systemd[1]: Starting systemd-networkd-wait-online.service...
Sep 6 00:03:13.046743 systemd-networkd[1511]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 6 00:03:13.056230 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Sep 6 00:03:13.056895 systemd-networkd[1511]: eth0: Link UP
Sep 6 00:03:13.057419 systemd-networkd[1511]: eth0: Gained carrier
Sep 6 00:03:13.072475 systemd-networkd[1511]: eth0: DHCPv4 address 172.31.24.61/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 6 00:03:13.254582 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 6 00:03:13.257742 systemd[1]: Finished systemd-udev-settle.service.
Sep 6 00:03:13.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:13.281374 systemd[1]: Starting lvm2-activation-early.service...
Sep 6 00:03:13.309034 lvm[1629]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 6 00:03:13.351411 systemd[1]: Finished lvm2-activation-early.service.
Sep 6 00:03:13.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:13.354354 systemd[1]: Reached target cryptsetup.target.
Sep 6 00:03:13.359707 systemd[1]: Starting lvm2-activation.service...
Sep 6 00:03:13.371092 lvm[1631]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 6 00:03:13.409430 systemd[1]: Finished lvm2-activation.service.
Sep 6 00:03:13.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:13.411700 systemd[1]: Reached target local-fs-pre.target.
Sep 6 00:03:13.413845 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 6 00:03:13.413925 systemd[1]: Reached target local-fs.target.
Sep 6 00:03:13.415896 systemd[1]: Reached target machines.target.
Sep 6 00:03:13.424478 systemd[1]: Starting ldconfig.service...
Sep 6 00:03:13.428601 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:03:13.428726 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:03:13.431592 systemd[1]: Starting systemd-boot-update.service...
Sep 6 00:03:13.436840 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Sep 6 00:03:13.442595 systemd[1]: Starting systemd-machine-id-commit.service...
Sep 6 00:03:13.453528 systemd[1]: Starting systemd-sysext.service...
Sep 6 00:03:13.464508 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1634 (bootctl)
Sep 6 00:03:13.466992 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Sep 6 00:03:13.488244 systemd[1]: Unmounting usr-share-oem.mount...
Sep 6 00:03:13.499959 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Sep 6 00:03:13.500539 systemd[1]: Unmounted usr-share-oem.mount.
Sep 6 00:03:13.536297 kernel: loop0: detected capacity change from 0 to 203944
Sep 6 00:03:13.538461 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Sep 6 00:03:13.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:13.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:13.581817 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 6 00:03:13.583092 systemd[1]: Finished systemd-machine-id-commit.service.
Sep 6 00:03:13.624283 systemd-fsck[1647]: fsck.fat 4.2 (2021-01-31)
Sep 6 00:03:13.624283 systemd-fsck[1647]: /dev/nvme0n1p1: 236 files, 117310/258078 clusters
Sep 6 00:03:13.628732 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Sep 6 00:03:13.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:13.638865 systemd[1]: Mounting boot.mount...
Sep 6 00:03:13.673998 systemd[1]: Mounted boot.mount.
Sep 6 00:03:13.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:13.708904 systemd[1]: Finished systemd-boot-update.service.
Sep 6 00:03:13.722486 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 6 00:03:13.743215 kernel: loop1: detected capacity change from 0 to 203944
Sep 6 00:03:13.760941 (sd-sysext)[1667]: Using extensions 'kubernetes'.
Sep 6 00:03:13.763297 (sd-sysext)[1667]: Merged extensions into '/usr'.
Sep 6 00:03:13.805090 systemd[1]: Mounting usr-share-oem.mount...
Sep 6 00:03:13.807345 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:03:13.810211 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:03:13.821323 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:03:13.825929 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:03:13.827804 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:03:13.828156 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:03:13.846434 systemd[1]: Mounted usr-share-oem.mount.
Sep 6 00:03:13.849310 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:03:13.849714 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:03:13.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:13.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:13.852778 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:03:13.853325 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:03:13.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:13.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:13.856556 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:03:13.856986 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:03:13.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:13.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:13.862592 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:03:13.862853 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 6 00:03:13.865363 systemd[1]: Finished systemd-sysext.service.
Sep 6 00:03:13.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:13.872163 systemd[1]: Starting ensure-sysext.service...
Sep 6 00:03:13.887094 systemd[1]: Starting systemd-tmpfiles-setup.service...
Sep 6 00:03:13.905173 systemd[1]: Reloading.
Sep 6 00:03:13.920750 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Sep 6 00:03:13.924240 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 6 00:03:13.928303 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 6 00:03:14.030372 /usr/lib/systemd/system-generators/torcx-generator[1702]: time="2025-09-06T00:03:14Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 6 00:03:14.032268 /usr/lib/systemd/system-generators/torcx-generator[1702]: time="2025-09-06T00:03:14Z" level=info msg="torcx already run"
Sep 6 00:03:14.280054 ldconfig[1633]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 6 00:03:14.349639 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 6 00:03:14.349689 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 6 00:03:14.391495 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 00:03:14.469383 systemd-networkd[1511]: eth0: Gained IPv6LL
Sep 6 00:03:14.565992 systemd[1]: Finished systemd-networkd-wait-online.service.
Sep 6 00:03:14.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.568916 systemd[1]: Finished ldconfig.service.
Sep 6 00:03:14.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.573125 systemd[1]: Finished systemd-tmpfiles-setup.service.
Sep 6 00:03:14.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.580885 systemd[1]: Starting audit-rules.service...
Sep 6 00:03:14.592393 systemd[1]: Starting clean-ca-certificates.service...
Sep 6 00:03:14.597376 systemd[1]: Starting systemd-journal-catalog-update.service...
Sep 6 00:03:14.603461 systemd[1]: Starting systemd-resolved.service...
Sep 6 00:03:14.611766 systemd[1]: Starting systemd-timesyncd.service...
Sep 6 00:03:14.619127 systemd[1]: Starting systemd-update-utmp.service...
Sep 6 00:03:14.638556 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:03:14.645173 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:03:14.650913 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:03:14.655706 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:03:14.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.659100 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:03:14.659461 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:03:14.662070 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:03:14.662490 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:03:14.671871 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:03:14.672297 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:03:14.676385 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:03:14.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.686294 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:03:14.688946 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:03:14.699916 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:03:14.704437 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:03:14.704737 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:03:14.706671 systemd[1]: Finished clean-ca-certificates.service.
Sep 6 00:03:14.713386 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:03:14.713744 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:03:14.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.716935 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:03:14.717389 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:03:14.732318 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:03:14.734955 systemd[1]: Starting modprobe@drm.service...
Sep 6 00:03:14.744118 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:03:14.751000 audit[1776]: SYSTEM_BOOT pid=1776 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.757397 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:03:14.759339 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:03:14.759664 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:03:14.760006 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 6 00:03:14.764335 systemd[1]: Finished systemd-journal-catalog-update.service.
Sep 6 00:03:14.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.775264 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:03:14.775645 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:03:14.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.778611 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 6 00:03:14.778991 systemd[1]: Finished modprobe@drm.service.
Sep 6 00:03:14.792817 systemd[1]: Starting systemd-update-done.service...
Sep 6 00:03:14.796446 systemd[1]: Finished ensure-sysext.service.
Sep 6 00:03:14.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.803943 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:03:14.805961 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:03:14.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.811888 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:03:14.812762 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:03:14.816976 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:03:14.822454 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 6 00:03:14.835694 systemd[1]: Finished systemd-update-utmp.service.
Sep 6 00:03:14.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.850058 systemd[1]: Finished systemd-update-done.service.
Sep 6 00:03:14.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:14.895000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 6 00:03:14.895000 audit[1811]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffe6caba0 a2=420 a3=0 items=0 ppid=1768 pid=1811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:03:14.895000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Sep 6 00:03:14.896743 augenrules[1811]: No rules
Sep 6 00:03:14.898661 systemd[1]: Finished audit-rules.service.
Sep 6 00:03:14.963487 systemd-resolved[1772]: Positive Trust Anchors:
Sep 6 00:03:14.963520 systemd-resolved[1772]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 6 00:03:14.963574 systemd-resolved[1772]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 6 00:03:14.979644 systemd[1]: Started systemd-timesyncd.service.
Sep 6 00:03:14.981976 systemd[1]: Reached target time-set.target.
Sep 6 00:03:14.997768 systemd-resolved[1772]: Defaulting to hostname 'linux'.
Sep 6 00:03:15.001548 systemd[1]: Started systemd-resolved.service.
Sep 6 00:03:15.003600 systemd[1]: Reached target network.target.
Sep 6 00:03:15.005557 systemd[1]: Reached target network-online.target.
Sep 6 00:03:15.007502 systemd[1]: Reached target nss-lookup.target.
Sep 6 00:03:15.009376 systemd[1]: Reached target sysinit.target.
Sep 6 00:03:15.011363 systemd[1]: Started motdgen.path.
Sep 6 00:03:15.013021 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Sep 6 00:03:15.015775 systemd[1]: Started logrotate.timer.
Sep 6 00:03:15.017670 systemd[1]: Started mdadm.timer.
Sep 6 00:03:15.019249 systemd[1]: Started systemd-tmpfiles-clean.timer.
Sep 6 00:03:15.021113 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 6 00:03:15.021165 systemd[1]: Reached target paths.target.
Sep 6 00:03:15.022855 systemd[1]: Reached target timers.target.
Sep 6 00:03:15.025090 systemd[1]: Listening on dbus.socket.
Sep 6 00:03:15.029259 systemd[1]: Starting docker.socket...
Sep 6 00:03:15.033754 systemd[1]: Listening on sshd.socket.
Sep 6 00:03:15.035856 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:03:15.036675 systemd[1]: Listening on docker.socket.
Sep 6 00:03:15.038654 systemd[1]: Reached target sockets.target.
Sep 6 00:03:15.040727 systemd[1]: Reached target basic.target.
Sep 6 00:03:15.042989 systemd[1]: System is tainted: cgroupsv1
Sep 6 00:03:15.043349 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 6 00:03:15.043590 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 6 00:03:15.046486 systemd[1]: Started amazon-ssm-agent.service.
Sep 6 00:03:15.051333 systemd[1]: Starting containerd.service...
Sep 6 00:03:15.057432 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Sep 6 00:03:15.062644 systemd[1]: Starting dbus.service...
Sep 6 00:03:15.069948 systemd[1]: Starting enable-oem-cloudinit.service...
Sep 6 00:03:15.073052 systemd-timesyncd[1773]: Contacted time server 135.148.100.14:123 (0.flatcar.pool.ntp.org).
Sep 6 00:03:15.113854 jq[1825]: false
Sep 6 00:03:15.073230 systemd-timesyncd[1773]: Initial clock synchronization to Sat 2025-09-06 00:03:15.393309 UTC.
Sep 6 00:03:15.079170 systemd[1]: Starting extend-filesystems.service...
Sep 6 00:03:15.081426 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Sep 6 00:03:15.084912 systemd[1]: Starting kubelet.service...
Sep 6 00:03:15.089882 systemd[1]: Starting motdgen.service...
Sep 6 00:03:15.103040 systemd[1]: Started nvidia.service.
Sep 6 00:03:15.114597 systemd[1]: Starting prepare-helm.service...
Sep 6 00:03:15.132463 systemd[1]: Starting ssh-key-proc-cmdline.service...
Sep 6 00:03:15.137441 systemd[1]: Starting sshd-keygen.service...
Sep 6 00:03:15.152235 systemd[1]: Starting systemd-logind.service...
Sep 6 00:03:15.154100 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:03:15.154307 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 6 00:03:15.263934 jq[1840]: true
Sep 6 00:03:15.164368 systemd[1]: Starting update-engine.service...
Sep 6 00:03:15.176676 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Sep 6 00:03:15.194383 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 6 00:03:15.194996 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Sep 6 00:03:15.266982 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 6 00:03:15.267601 systemd[1]: Finished ssh-key-proc-cmdline.service.
Sep 6 00:03:15.325630 tar[1844]: linux-arm64/helm
Sep 6 00:03:15.332676 jq[1846]: true
Sep 6 00:03:15.386849 extend-filesystems[1826]: Found loop1
Sep 6 00:03:15.386849 extend-filesystems[1826]: Found nvme0n1
Sep 6 00:03:15.404374 extend-filesystems[1826]: Found nvme0n1p1
Sep 6 00:03:15.404374 extend-filesystems[1826]: Found nvme0n1p7
Sep 6 00:03:15.404374 extend-filesystems[1826]: Found nvme0n1p9
Sep 6 00:03:15.404374 extend-filesystems[1826]: Checking size of /dev/nvme0n1p9
Sep 6 00:03:15.430716 systemd[1]: motdgen.service: Deactivated successfully.
Sep 6 00:03:15.431323 systemd[1]: Finished motdgen.service.
Sep 6 00:03:15.506810 dbus-daemon[1824]: [system] SELinux support is enabled
Sep 6 00:03:15.507178 systemd[1]: Started dbus.service.
Sep 6 00:03:15.512681 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 6 00:03:15.512746 systemd[1]: Reached target system-config.target.
Sep 6 00:03:15.517312 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 6 00:03:15.517351 systemd[1]: Reached target user-config.target.
Sep 6 00:03:15.532491 extend-filesystems[1826]: Resized partition /dev/nvme0n1p9
Sep 6 00:03:15.536594 dbus-daemon[1824]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1511 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 6 00:03:15.543134 systemd[1]: Starting systemd-hostnamed.service...
Sep 6 00:03:15.555150 extend-filesystems[1894]: resize2fs 1.46.5 (30-Dec-2021)
Sep 6 00:03:15.583249 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Sep 6 00:03:15.677671 update_engine[1838]: I0906 00:03:15.666311 1838 main.cc:92] Flatcar Update Engine starting
Sep 6 00:03:15.692239 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Sep 6 00:03:15.703163 bash[1893]: Updated "/home/core/.ssh/authorized_keys"
Sep 6 00:03:15.693933 systemd[1]: Started update-engine.service.
Sep 6 00:03:15.725147 update_engine[1838]: I0906 00:03:15.709628 1838 update_check_scheduler.cc:74] Next update check in 4m14s
Sep 6 00:03:15.700802 systemd[1]: Started locksmithd.service.
Sep 6 00:03:15.705222 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Sep 6 00:03:15.728512 extend-filesystems[1894]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Sep 6 00:03:15.728512 extend-filesystems[1894]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 6 00:03:15.728512 extend-filesystems[1894]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Sep 6 00:03:15.766803 extend-filesystems[1826]: Resized filesystem in /dev/nvme0n1p9
Sep 6 00:03:15.766803 extend-filesystems[1826]: Found nvme0n1p2
Sep 6 00:03:15.766803 extend-filesystems[1826]: Found nvme0n1p3
Sep 6 00:03:15.766803 extend-filesystems[1826]: Found usr
Sep 6 00:03:15.766803 extend-filesystems[1826]: Found nvme0n1p4
Sep 6 00:03:15.766803 extend-filesystems[1826]: Found nvme0n1p6
Sep 6 00:03:15.741846 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 6 00:03:15.742471 systemd[1]: Finished extend-filesystems.service.
Sep 6 00:03:15.860772 amazon-ssm-agent[1820]: 2025/09/06 00:03:15 Failed to load instance info from vault. RegistrationKey does not exist.
Sep 6 00:03:15.894810 amazon-ssm-agent[1820]: Initializing new seelog logger
Sep 6 00:03:15.895057 amazon-ssm-agent[1820]: New Seelog Logger Creation Complete
Sep 6 00:03:15.895162 amazon-ssm-agent[1820]: 2025/09/06 00:03:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 6 00:03:15.895162 amazon-ssm-agent[1820]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 6 00:03:15.908885 amazon-ssm-agent[1820]: 2025/09/06 00:03:15 processing appconfig overrides
Sep 6 00:03:15.937011 systemd[1]: nvidia.service: Deactivated successfully.
Sep 6 00:03:15.970629 env[1848]: time="2025-09-06T00:03:15.970490801Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Sep 6 00:03:16.009690 systemd-logind[1837]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 6 00:03:16.009760 systemd-logind[1837]: Watching system buttons on /dev/input/event1 (Sleep Button)
Sep 6 00:03:16.029757 systemd-logind[1837]: New seat seat0.
Sep 6 00:03:16.033169 systemd[1]: Started systemd-logind.service.
Sep 6 00:03:16.205243 dbus-daemon[1824]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 6 00:03:16.205523 systemd[1]: Started systemd-hostnamed.service.
Sep 6 00:03:16.211599 dbus-daemon[1824]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1895 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Sep 6 00:03:16.217248 systemd[1]: Starting polkit.service...
Sep 6 00:03:16.247912 env[1848]: time="2025-09-06T00:03:16.247367286Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 6 00:03:16.247912 env[1848]: time="2025-09-06T00:03:16.247648888Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:03:16.265273 env[1848]: time="2025-09-06T00:03:16.263560118Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.190-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:03:16.265273 env[1848]: time="2025-09-06T00:03:16.263630696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:03:16.265273 env[1848]: time="2025-09-06T00:03:16.264125206Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:03:16.265273 env[1848]: time="2025-09-06T00:03:16.264163447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 6 00:03:16.265273 env[1848]: time="2025-09-06T00:03:16.264194960Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 6 00:03:16.265273 env[1848]: time="2025-09-06T00:03:16.264245357Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 6 00:03:16.265273 env[1848]: time="2025-09-06T00:03:16.264420412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:03:16.265273 env[1848]: time="2025-09-06T00:03:16.265013432Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:03:16.265956 env[1848]: time="2025-09-06T00:03:16.265910656Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:03:16.266095 polkitd[1938]: Started polkitd version 121
Sep 6 00:03:16.269354 env[1848]: time="2025-09-06T00:03:16.269285658Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 6 00:03:16.269698 env[1848]: time="2025-09-06T00:03:16.269662525Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 6 00:03:16.271082 env[1848]: time="2025-09-06T00:03:16.271045086Z" level=info msg="metadata content store policy set" policy=shared
Sep 6 00:03:16.293175 env[1848]: time="2025-09-06T00:03:16.293116334Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 6 00:03:16.293427 env[1848]: time="2025-09-06T00:03:16.293390710Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 6 00:03:16.293568 env[1848]: time="2025-09-06T00:03:16.293535024Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 6 00:03:16.293795 env[1848]: time="2025-09-06T00:03:16.293758566Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 6 00:03:16.293934 env[1848]: time="2025-09-06T00:03:16.293902842Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 6 00:03:16.294070 env[1848]: time="2025-09-06T00:03:16.294037384Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 6 00:03:16.294212 env[1848]: time="2025-09-06T00:03:16.294180650Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 6 00:03:16.294890 env[1848]: time="2025-09-06T00:03:16.294839805Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 6 00:03:16.295080 env[1848]: time="2025-09-06T00:03:16.295047359Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Sep 6 00:03:16.295259 env[1848]: time="2025-09-06T00:03:16.295199361Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 6 00:03:16.295406 env[1848]: time="2025-09-06T00:03:16.295374428Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 6 00:03:16.295531 env[1848]: time="2025-09-06T00:03:16.295500370Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 6 00:03:16.295861 env[1848]: time="2025-09-06T00:03:16.295827327Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 6 00:03:16.296162 env[1848]: time="2025-09-06T00:03:16.296129959Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 6 00:03:16.307145 env[1848]: time="2025-09-06T00:03:16.307091934Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 6 00:03:16.307442 env[1848]: time="2025-09-06T00:03:16.307390510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 6 00:03:16.308053 env[1848]: time="2025-09-06T00:03:16.308010189Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 6 00:03:16.312585 env[1848]: time="2025-09-06T00:03:16.312528447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 6 00:03:16.315620 env[1848]: time="2025-09-06T00:03:16.315564599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 6 00:03:16.319694 env[1848]: time="2025-09-06T00:03:16.319639357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 6 00:03:16.319893 env[1848]: time="2025-09-06T00:03:16.319861501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 6 00:03:16.320074 env[1848]: time="2025-09-06T00:03:16.320042321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 6 00:03:16.320367 polkitd[1938]: Loading rules from directory /etc/polkit-1/rules.d
Sep 6 00:03:16.320501 polkitd[1938]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 6 00:03:16.322303 env[1848]: time="2025-09-06T00:03:16.322247359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 6 00:03:16.322503 env[1848]: time="2025-09-06T00:03:16.322470177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 6 00:03:16.322633 env[1848]: time="2025-09-06T00:03:16.322601848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 6 00:03:16.322968 env[1848]: time="2025-09-06T00:03:16.322931987Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 6 00:03:16.323474 env[1848]: time="2025-09-06T00:03:16.323438528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 6 00:03:16.323788 polkitd[1938]: Finished loading, compiling and executing 2 rules
Sep 6 00:03:16.324082 env[1848]: time="2025-09-06T00:03:16.324039523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 6 00:03:16.324343 env[1848]: time="2025-09-06T00:03:16.324295602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 6 00:03:16.324680 dbus-daemon[1824]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Sep 6 00:03:16.324994 systemd[1]: Started polkit.service.
Sep 6 00:03:16.325538 polkitd[1938]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 6 00:03:16.330491 env[1848]: time="2025-09-06T00:03:16.330440169Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 6 00:03:16.331280 env[1848]: time="2025-09-06T00:03:16.331179538Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Sep 6 00:03:16.331447 env[1848]: time="2025-09-06T00:03:16.331411304Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 6 00:03:16.331606 env[1848]: time="2025-09-06T00:03:16.331569784Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Sep 6 00:03:16.331830 env[1848]: time="2025-09-06T00:03:16.331794112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 6 00:03:16.336071 env[1848]: time="2025-09-06T00:03:16.335935254Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 6 00:03:16.342389 env[1848]: time="2025-09-06T00:03:16.342339158Z" level=info msg="Connect containerd service"
Sep 6 00:03:16.342623 env[1848]: time="2025-09-06T00:03:16.342588210Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 6 00:03:16.347558 env[1848]: time="2025-09-06T00:03:16.347500022Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 6 00:03:16.353390 env[1848]: time="2025-09-06T00:03:16.353274949Z" level=info msg="Start subscribing containerd event"
Sep 6 00:03:16.353390 env[1848]: time="2025-09-06T00:03:16.353369028Z" level=info msg="Start recovering state"
Sep 6 00:03:16.353577 env[1848]: time="2025-09-06T00:03:16.353487208Z" level=info msg="Start event monitor"
Sep 6 00:03:16.353577 env[1848]: time="2025-09-06T00:03:16.353526846Z" level=info msg="Start snapshots syncer"
Sep 6 00:03:16.353577 env[1848]: time="2025-09-06T00:03:16.353557287Z" level=info msg="Start cni network conf syncer for default"
Sep 6 00:03:16.353764 env[1848]: time="2025-09-06T00:03:16.353578654Z" level=info msg="Start streaming server"
Sep 6 00:03:16.359489 env[1848]: time="2025-09-06T00:03:16.359435853Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 6 00:03:16.360499 env[1848]: time="2025-09-06T00:03:16.360456586Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 6 00:03:16.363318 systemd[1]: Started containerd.service.
Sep 6 00:03:16.376559 systemd-resolved[1772]: System hostname changed to 'ip-172-31-24-61'.
Sep 6 00:03:16.376561 systemd-hostnamed[1895]: Hostname set to (transient)
Sep 6 00:03:16.383390 env[1848]: time="2025-09-06T00:03:16.383308801Z" level=info msg="containerd successfully booted in 0.419955s"
Sep 6 00:03:16.493981 coreos-metadata[1823]: Sep 06 00:03:16.493 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 6 00:03:16.496968 coreos-metadata[1823]: Sep 06 00:03:16.496 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1
Sep 6 00:03:16.500351 coreos-metadata[1823]: Sep 06 00:03:16.500 INFO Fetch successful
Sep 6 00:03:16.500351 coreos-metadata[1823]: Sep 06 00:03:16.500 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1
Sep 6 00:03:16.506911 coreos-metadata[1823]: Sep 06 00:03:16.506 INFO Fetch successful
Sep 6 00:03:16.512417 unknown[1823]: wrote ssh authorized keys file for user: core
Sep 6 00:03:16.540646 update-ssh-keys[1977]: Updated "/home/core/.ssh/authorized_keys"
Sep 6 00:03:16.541595 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Sep 6 00:03:16.930733 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO Create new startup processor Sep 6 00:03:16.933472 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [LongRunningPluginsManager] registered plugins: {} Sep 6 00:03:16.934390 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO Initializing bookkeeping folders Sep 6 00:03:16.934595 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO removing the completed state files Sep 6 00:03:16.934734 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO Initializing bookkeeping folders for long running plugins Sep 6 00:03:16.934858 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Sep 6 00:03:16.934989 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO Initializing healthcheck folders for long running plugins Sep 6 00:03:16.935127 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO Initializing locations for inventory plugin Sep 6 00:03:16.937075 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO Initializing default location for custom inventory Sep 6 00:03:16.945405 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO Initializing default location for file inventory Sep 6 00:03:16.946910 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO Initializing default location for role inventory Sep 6 00:03:16.947138 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO Init the cloudwatchlogs publisher Sep 6 00:03:16.949435 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [instanceID=i-077ef6c9388618579] Successfully loaded platform independent plugin aws:runPowerShellScript Sep 6 00:03:16.964400 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [instanceID=i-077ef6c9388618579] Successfully loaded platform independent plugin aws:configureDocker Sep 6 00:03:16.964400 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [instanceID=i-077ef6c9388618579] Successfully loaded platform independent plugin aws:runDockerAction Sep 6 00:03:16.964614 amazon-ssm-agent[1820]: 
2025-09-06 00:03:16 INFO [instanceID=i-077ef6c9388618579] Successfully loaded platform independent plugin aws:runDocument Sep 6 00:03:16.964614 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [instanceID=i-077ef6c9388618579] Successfully loaded platform independent plugin aws:softwareInventory Sep 6 00:03:16.964614 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [instanceID=i-077ef6c9388618579] Successfully loaded platform independent plugin aws:updateSsmAgent Sep 6 00:03:16.964614 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [instanceID=i-077ef6c9388618579] Successfully loaded platform independent plugin aws:refreshAssociation Sep 6 00:03:16.964864 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [instanceID=i-077ef6c9388618579] Successfully loaded platform independent plugin aws:configurePackage Sep 6 00:03:16.964864 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [instanceID=i-077ef6c9388618579] Successfully loaded platform independent plugin aws:downloadContent Sep 6 00:03:16.964864 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [instanceID=i-077ef6c9388618579] Successfully loaded platform dependent plugin aws:runShellScript Sep 6 00:03:16.964864 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Sep 6 00:03:16.964864 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO OS: linux, Arch: arm64 Sep 6 00:03:16.975069 amazon-ssm-agent[1820]: datastore file /var/lib/amazon/ssm/i-077ef6c9388618579/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Sep 6 00:03:17.032376 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [MessageGatewayService] Starting session document processing engine... Sep 6 00:03:17.127377 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [MessageGatewayService] [EngineProcessor] Starting Sep 6 00:03:17.221803 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. 
Sep 6 00:03:17.316324 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-077ef6c9388618579, requestId: 1cba4b01-a57a-4a1b-bac9-cf81ff1bd261 Sep 6 00:03:17.412353 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [MessagingDeliveryService] Starting document processing engine... Sep 6 00:03:17.506327 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [MessagingDeliveryService] [EngineProcessor] Starting Sep 6 00:03:17.578136 tar[1844]: linux-arm64/LICENSE Sep 6 00:03:17.578789 tar[1844]: linux-arm64/README.md Sep 6 00:03:17.595833 systemd[1]: Finished prepare-helm.service. Sep 6 00:03:17.601680 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Sep 6 00:03:17.696685 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [MessagingDeliveryService] Starting message polling Sep 6 00:03:17.780435 locksmithd[1910]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 6 00:03:17.792146 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [MessagingDeliveryService] Starting send replies to MDS Sep 6 00:03:17.887854 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [instanceID=i-077ef6c9388618579] Starting association polling Sep 6 00:03:17.983763 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Sep 6 00:03:18.079946 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [MessagingDeliveryService] [Association] Launching response handler Sep 6 00:03:18.177276 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Sep 6 00:03:18.187958 systemd[1]: Started kubelet.service. 
Sep 6 00:03:18.272750 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Sep 6 00:03:18.369511 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Sep 6 00:03:18.466404 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [MessageGatewayService] listening reply. Sep 6 00:03:18.563508 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [HealthCheck] HealthCheck reporting agent health. Sep 6 00:03:18.660873 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [OfflineService] Starting document processing engine... Sep 6 00:03:18.758333 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [OfflineService] [EngineProcessor] Starting Sep 6 00:03:18.856054 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [OfflineService] [EngineProcessor] Initial processing Sep 6 00:03:18.953991 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [OfflineService] Starting message polling Sep 6 00:03:19.052107 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [OfflineService] Starting send replies to MDS Sep 6 00:03:19.150360 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [LongRunningPluginsManager] starting long running plugin manager Sep 6 00:03:19.178482 sshd_keygen[1870]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 6 00:03:19.195732 kubelet[2056]: E0906 00:03:19.195641 2056 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:03:19.198990 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:03:19.199454 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 6 00:03:19.231203 systemd[1]: Finished sshd-keygen.service. Sep 6 00:03:19.241158 systemd[1]: Starting issuegen.service... Sep 6 00:03:19.248881 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Sep 6 00:03:19.255465 systemd[1]: issuegen.service: Deactivated successfully. Sep 6 00:03:19.256027 systemd[1]: Finished issuegen.service. Sep 6 00:03:19.265924 systemd[1]: Starting systemd-user-sessions.service... Sep 6 00:03:19.281854 systemd[1]: Finished systemd-user-sessions.service. Sep 6 00:03:19.289938 systemd[1]: Started getty@tty1.service. Sep 6 00:03:19.297079 systemd[1]: Started serial-getty@ttyS0.service. Sep 6 00:03:19.299913 systemd[1]: Reached target getty.target. Sep 6 00:03:19.307646 systemd[1]: Reached target multi-user.target. Sep 6 00:03:19.313394 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 6 00:03:19.330993 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 6 00:03:19.331839 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 6 00:03:19.335477 systemd[1]: Startup finished in 12.813s (kernel) + 12.115s (userspace) = 24.929s. 
Sep 6 00:03:19.347595 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Sep 6 00:03:19.448671 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [StartupProcessor] Executing startup processor tasks Sep 6 00:03:19.550351 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Sep 6 00:03:19.651976 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Sep 6 00:03:19.753738 amazon-ssm-agent[1820]: 2025-09-06 00:03:16 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.8 Sep 6 00:03:19.854599 amazon-ssm-agent[1820]: 2025-09-06 00:03:17 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-077ef6c9388618579?role=subscribe&stream=input Sep 6 00:03:19.954409 amazon-ssm-agent[1820]: 2025-09-06 00:03:17 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-077ef6c9388618579?role=subscribe&stream=input Sep 6 00:03:20.056773 amazon-ssm-agent[1820]: 2025-09-06 00:03:17 INFO [MessageGatewayService] Starting receiving message from control channel Sep 6 00:03:20.158156 amazon-ssm-agent[1820]: 2025-09-06 00:03:17 INFO [MessageGatewayService] [EngineProcessor] Initial processing Sep 6 00:03:23.454535 systemd[1]: Created slice system-sshd.slice. Sep 6 00:03:23.457044 systemd[1]: Started sshd@0-172.31.24.61:22-147.75.109.163:52462.service. 
Sep 6 00:03:23.671426 sshd[2081]: Accepted publickey for core from 147.75.109.163 port 52462 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:03:23.676901 sshd[2081]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:03:23.699332 systemd[1]: Created slice user-500.slice. Sep 6 00:03:23.701694 systemd[1]: Starting user-runtime-dir@500.service... Sep 6 00:03:23.714398 systemd-logind[1837]: New session 1 of user core. Sep 6 00:03:23.723715 systemd[1]: Finished user-runtime-dir@500.service. Sep 6 00:03:23.728605 systemd[1]: Starting user@500.service... Sep 6 00:03:23.739535 (systemd)[2086]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:03:23.935555 systemd[2086]: Queued start job for default target default.target. Sep 6 00:03:23.936653 systemd[2086]: Reached target paths.target. Sep 6 00:03:23.936866 systemd[2086]: Reached target sockets.target. Sep 6 00:03:23.937337 systemd[2086]: Reached target timers.target. Sep 6 00:03:23.937573 systemd[2086]: Reached target basic.target. Sep 6 00:03:23.937897 systemd[1]: Started user@500.service. Sep 6 00:03:23.939454 systemd[2086]: Reached target default.target. Sep 6 00:03:23.939543 systemd[2086]: Startup finished in 186ms. Sep 6 00:03:23.939872 systemd[1]: Started session-1.scope. Sep 6 00:03:24.092243 systemd[1]: Started sshd@1-172.31.24.61:22-147.75.109.163:52468.service. Sep 6 00:03:24.270558 sshd[2095]: Accepted publickey for core from 147.75.109.163 port 52468 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:03:24.273932 sshd[2095]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:03:24.281966 systemd-logind[1837]: New session 2 of user core. Sep 6 00:03:24.284410 systemd[1]: Started session-2.scope. 
Sep 6 00:03:24.423734 sshd[2095]: pam_unix(sshd:session): session closed for user core Sep 6 00:03:24.429791 systemd[1]: sshd@1-172.31.24.61:22-147.75.109.163:52468.service: Deactivated successfully. Sep 6 00:03:24.432713 systemd[1]: session-2.scope: Deactivated successfully. Sep 6 00:03:24.433760 systemd-logind[1837]: Session 2 logged out. Waiting for processes to exit. Sep 6 00:03:24.437121 systemd-logind[1837]: Removed session 2. Sep 6 00:03:24.450728 systemd[1]: Started sshd@2-172.31.24.61:22-147.75.109.163:52476.service. Sep 6 00:03:24.627341 sshd[2102]: Accepted publickey for core from 147.75.109.163 port 52476 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:03:24.630550 sshd[2102]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:03:24.638760 systemd-logind[1837]: New session 3 of user core. Sep 6 00:03:24.639732 systemd[1]: Started session-3.scope. Sep 6 00:03:24.768761 sshd[2102]: pam_unix(sshd:session): session closed for user core Sep 6 00:03:24.773648 systemd[1]: sshd@2-172.31.24.61:22-147.75.109.163:52476.service: Deactivated successfully. Sep 6 00:03:24.775686 systemd[1]: session-3.scope: Deactivated successfully. Sep 6 00:03:24.775721 systemd-logind[1837]: Session 3 logged out. Waiting for processes to exit. Sep 6 00:03:24.778448 systemd-logind[1837]: Removed session 3. Sep 6 00:03:24.794163 systemd[1]: Started sshd@3-172.31.24.61:22-147.75.109.163:52480.service. Sep 6 00:03:24.967418 sshd[2109]: Accepted publickey for core from 147.75.109.163 port 52480 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:03:24.970591 sshd[2109]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:03:24.979737 systemd[1]: Started session-4.scope. Sep 6 00:03:24.981343 systemd-logind[1837]: New session 4 of user core. 
Sep 6 00:03:25.117580 sshd[2109]: pam_unix(sshd:session): session closed for user core Sep 6 00:03:25.123641 systemd[1]: sshd@3-172.31.24.61:22-147.75.109.163:52480.service: Deactivated successfully. Sep 6 00:03:25.126547 systemd[1]: session-4.scope: Deactivated successfully. Sep 6 00:03:25.127884 systemd-logind[1837]: Session 4 logged out. Waiting for processes to exit. Sep 6 00:03:25.130674 systemd-logind[1837]: Removed session 4. Sep 6 00:03:25.143502 systemd[1]: Started sshd@4-172.31.24.61:22-147.75.109.163:52496.service. Sep 6 00:03:25.318898 sshd[2116]: Accepted publickey for core from 147.75.109.163 port 52496 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:03:25.322529 sshd[2116]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:03:25.332484 systemd-logind[1837]: New session 5 of user core. Sep 6 00:03:25.332882 systemd[1]: Started session-5.scope. Sep 6 00:03:25.521712 sudo[2120]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 6 00:03:25.522423 sudo[2120]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 00:03:25.539558 dbus-daemon[1824]: avc: received setenforce notice (enforcing=1) Sep 6 00:03:25.543314 sudo[2120]: pam_unix(sudo:session): session closed for user root Sep 6 00:03:25.571004 sshd[2116]: pam_unix(sshd:session): session closed for user core Sep 6 00:03:25.577922 systemd-logind[1837]: Session 5 logged out. Waiting for processes to exit. Sep 6 00:03:25.578499 systemd[1]: sshd@4-172.31.24.61:22-147.75.109.163:52496.service: Deactivated successfully. Sep 6 00:03:25.580139 systemd[1]: session-5.scope: Deactivated successfully. Sep 6 00:03:25.582132 systemd-logind[1837]: Removed session 5. Sep 6 00:03:25.595356 systemd[1]: Started sshd@5-172.31.24.61:22-147.75.109.163:52506.service. 
Sep 6 00:03:25.773012 sshd[2124]: Accepted publickey for core from 147.75.109.163 port 52506 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:03:25.776669 sshd[2124]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:03:25.787341 systemd[1]: Started session-6.scope. Sep 6 00:03:25.788311 systemd-logind[1837]: New session 6 of user core. Sep 6 00:03:25.901664 sudo[2129]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 6 00:03:25.902875 sudo[2129]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 00:03:25.909553 sudo[2129]: pam_unix(sudo:session): session closed for user root Sep 6 00:03:25.919670 sudo[2128]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 6 00:03:25.920756 sudo[2128]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 00:03:25.941152 systemd[1]: Stopping audit-rules.service... Sep 6 00:03:25.941000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Sep 6 00:03:25.945182 kernel: kauditd_printk_skb: 65 callbacks suppressed Sep 6 00:03:25.945325 kernel: audit: type=1305 audit(1757117005.941:159): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Sep 6 00:03:25.951128 auditctl[2132]: No rules Sep 6 00:03:25.941000 audit[2132]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd2f11530 a2=420 a3=0 items=0 ppid=1 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:03:25.952573 systemd[1]: audit-rules.service: Deactivated successfully. Sep 6 00:03:25.953115 systemd[1]: Stopped audit-rules.service. 
Sep 6 00:03:25.959780 systemd[1]: Starting audit-rules.service... Sep 6 00:03:25.963549 kernel: audit: type=1300 audit(1757117005.941:159): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd2f11530 a2=420 a3=0 items=0 ppid=1 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:03:25.941000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Sep 6 00:03:25.974289 kernel: audit: type=1327 audit(1757117005.941:159): proctitle=2F7362696E2F617564697463746C002D44 Sep 6 00:03:25.974415 kernel: audit: type=1131 audit(1757117005.951:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:25.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:26.007834 augenrules[2150]: No rules Sep 6 00:03:26.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:26.009573 systemd[1]: Finished audit-rules.service. Sep 6 00:03:26.018754 sudo[2128]: pam_unix(sudo:session): session closed for user root Sep 6 00:03:26.017000 audit[2128]: USER_END pid=2128 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Sep 6 00:03:26.030370 kernel: audit: type=1130 audit(1757117006.009:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:26.030507 kernel: audit: type=1106 audit(1757117006.017:162): pid=2128 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:03:26.030557 kernel: audit: type=1104 audit(1757117006.017:163): pid=2128 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:03:26.017000 audit[2128]: CRED_DISP pid=2128 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:03:26.042586 sshd[2124]: pam_unix(sshd:session): session closed for user core Sep 6 00:03:26.043000 audit[2124]: USER_END pid=2124 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:03:26.058403 systemd[1]: sshd@5-172.31.24.61:22-147.75.109.163:52506.service: Deactivated successfully. 
Sep 6 00:03:26.043000 audit[2124]: CRED_DISP pid=2124 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:03:26.069287 kernel: audit: type=1106 audit(1757117006.043:164): pid=2124 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:03:26.069399 kernel: audit: type=1104 audit(1757117006.043:165): pid=2124 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:03:26.069448 kernel: audit: type=1131 audit(1757117006.057:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.24.61:22-147.75.109.163:52506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:26.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.24.61:22-147.75.109.163:52506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:26.059766 systemd[1]: session-6.scope: Deactivated successfully. Sep 6 00:03:26.068558 systemd-logind[1837]: Session 6 logged out. Waiting for processes to exit. Sep 6 00:03:26.084593 systemd[1]: Started sshd@6-172.31.24.61:22-147.75.109.163:52514.service. 
Sep 6 00:03:26.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.24.61:22-147.75.109.163:52514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:26.086753 systemd-logind[1837]: Removed session 6. Sep 6 00:03:26.255000 audit[2157]: USER_ACCT pid=2157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:03:26.256977 sshd[2157]: Accepted publickey for core from 147.75.109.163 port 52514 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:03:26.257000 audit[2157]: CRED_ACQ pid=2157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:03:26.258000 audit[2157]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffffa44500 a2=3 a3=1 items=0 ppid=1 pid=2157 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:03:26.258000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:03:26.260490 sshd[2157]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:03:26.270923 systemd[1]: Started session-7.scope. Sep 6 00:03:26.273514 systemd-logind[1837]: New session 7 of user core. 
Sep 6 00:03:26.287000 audit[2157]: USER_START pid=2157 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:03:26.291000 audit[2160]: CRED_ACQ pid=2160 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:03:26.387000 audit[2161]: USER_ACCT pid=2161 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:03:26.388452 sudo[2161]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 6 00:03:26.388000 audit[2161]: CRED_REFR pid=2161 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:03:26.389111 sudo[2161]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 00:03:26.393000 audit[2161]: USER_START pid=2161 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:03:26.478070 systemd[1]: Starting docker.service... 
Sep 6 00:03:26.612224 env[2171]: time="2025-09-06T00:03:26.612021204Z" level=info msg="Starting up" Sep 6 00:03:26.617966 env[2171]: time="2025-09-06T00:03:26.617870522Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 00:03:26.617966 env[2171]: time="2025-09-06T00:03:26.617929590Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 00:03:26.618185 env[2171]: time="2025-09-06T00:03:26.617970856Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 00:03:26.618185 env[2171]: time="2025-09-06T00:03:26.617997328Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 00:03:26.623325 env[2171]: time="2025-09-06T00:03:26.623271318Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 00:03:26.623526 env[2171]: time="2025-09-06T00:03:26.623488928Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 00:03:26.623699 env[2171]: time="2025-09-06T00:03:26.623652756Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 00:03:26.623837 env[2171]: time="2025-09-06T00:03:26.623807017Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 00:03:26.638661 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport109287324-merged.mount: Deactivated successfully. Sep 6 00:03:26.899809 env[2171]: time="2025-09-06T00:03:26.899663307Z" level=warning msg="Your kernel does not support cgroup blkio weight" Sep 6 00:03:26.900088 env[2171]: time="2025-09-06T00:03:26.900053900Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Sep 6 00:03:26.900786 env[2171]: time="2025-09-06T00:03:26.900731138Z" level=info msg="Loading containers: start." 
Sep 6 00:03:27.007000 audit[2202]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2202 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 6 00:03:27.007000 audit[2202]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffff5e1df70 a2=0 a3=1 items=0 ppid=2171 pid=2202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:03:27.007000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Sep 6 00:03:27.013000 audit[2204]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2204 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 6 00:03:27.013000 audit[2204]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe65c0a00 a2=0 a3=1 items=0 ppid=2171 pid=2204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:03:27.013000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Sep 6 00:03:27.018000 audit[2206]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2206 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 6 00:03:27.018000 audit[2206]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffdbe551a0 a2=0 a3=1 items=0 ppid=2171 pid=2206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:03:27.018000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Sep 6 00:03:27.023000 audit[2208]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2208 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 6 00:03:27.023000 audit[2208]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffebb10180 a2=0 a3=1 items=0 ppid=2171 pid=2208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:03:27.023000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Sep 6 00:03:27.036000 audit[2210]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2210 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 6 00:03:27.036000 audit[2210]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffffeefb460 a2=0 a3=1 items=0 ppid=2171 pid=2210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:03:27.036000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E
Sep 6 00:03:27.069000 audit[2215]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=2215 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 6 00:03:27.069000 audit[2215]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe89cfb80 a2=0 a3=1 items=0 ppid=2171 pid=2215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:03:27.069000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E
Sep 6 00:03:27.081000 audit[2217]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2217 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 6 00:03:27.081000 audit[2217]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc82d1010 a2=0 a3=1 items=0 ppid=2171 pid=2217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:03:27.081000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Sep 6 00:03:27.086000 audit[2219]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2219 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 6 00:03:27.086000 audit[2219]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffd1ed3dc0 a2=0 a3=1 items=0 ppid=2171 pid=2219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:03:27.086000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Sep 6 00:03:27.090000 audit[2221]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=2221 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 6 00:03:27.090000 audit[2221]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffe29c9490 a2=0 a3=1 items=0 ppid=2171 pid=2221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:03:27.090000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Sep 6 00:03:27.105000 audit[2225]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=2225 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 6 00:03:27.105000 audit[2225]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffd99aaac0 a2=0 a3=1 items=0 ppid=2171 pid=2225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:03:27.105000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Sep 6 00:03:27.113000 audit[2226]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2226 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 6 00:03:27.113000 audit[2226]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffc5dfaf70 a2=0 a3=1 items=0 ppid=2171 pid=2226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:03:27.113000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Sep 6 00:03:27.132235 kernel: Initializing XFRM netlink socket
Sep 6 00:03:27.182231 env[2171]: time="2025-09-06T00:03:27.182132309Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Sep 6 00:03:27.185559 (udev-worker)[2182]: Network interface NamePolicy= disabled on kernel command line.
Sep 6 00:03:27.229000 audit[2234]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2234 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 6 00:03:27.229000 audit[2234]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=ffffd0afeda0 a2=0 a3=1 items=0 ppid=2171 pid=2234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:03:27.229000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
Sep 6 00:03:27.243000 audit[2237]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2237 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 6 00:03:27.243000 audit[2237]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffd6786dc0 a2=0 a3=1 items=0 ppid=2171 pid=2237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:03:27.243000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
Sep 6 00:03:27.251000 audit[2240]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2240 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 6 00:03:27.251000 audit[2240]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffd65ba740 a2=0 a3=1 items=0 ppid=2171 pid=2240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:03:27.251000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054
Sep 6 00:03:27.256000 audit[2242]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2242 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 6 00:03:27.256000 audit[2242]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffe48db640 a2=0 a3=1 items=0 ppid=2171 pid=2242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:03:27.256000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054
Sep 6 00:03:27.261000 audit[2244]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=2244 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 6 00:03:27.261000 audit[2244]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=ffffe8dc90b0 a2=0 a3=1 items=0 ppid=2171 pid=2244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:03:27.261000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Sep 6 00:03:27.266000 audit[2246]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=2246 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 6 00:03:27.266000 audit[2246]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=ffffd6c578a0 a2=0 a3=1 items=0 ppid=2171 pid=2246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:03:27.266000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
Sep 6 00:03:27.270000 audit[2248]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2248 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 6 00:03:27.270000 audit[2248]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=ffffe05b0080 a2=0 a3=1 items=0 ppid=2171 pid=2248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:03:27.270000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552
Sep 6 00:03:27.299000 audit[2251]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=2251 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 6 00:03:27.299000 audit[2251]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=ffffc9587030 a2=0 a3=1 items=0 ppid=2171 pid=2251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:03:27.299000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054
Sep 6 00:03:27.304000 audit[2253]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=2253 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 6 00:03:27.304000 audit[2253]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffeaeef670 a2=0 a3=1 items=0 ppid=2171 pid=2253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:03:27.304000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Sep 6 00:03:27.309000 audit[2255]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2255 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 6 00:03:27.309000 audit[2255]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffd9b8d710 a2=0 a3=1 items=0 ppid=2171 pid=2255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:03:27.309000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32
Sep 6 00:03:27.314000 audit[2257]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2257 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 6 00:03:27.314000 audit[2257]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc33b3830 a2=0 a3=1 items=0 ppid=2171 pid=2257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:03:27.314000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50
Sep 6 00:03:27.316891 systemd-networkd[1511]: docker0: Link UP
Sep 6 00:03:27.331000 audit[2261]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=2261 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 6 00:03:27.331000 audit[2261]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd8f44fd0 a2=0 a3=1 items=0 ppid=2171 pid=2261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:03:27.331000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Sep 6 00:03:27.337000 audit[2262]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2262 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Sep 6 00:03:27.337000 audit[2262]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffe350ff40 a2=0 a3=1 items=0 ppid=2171 pid=2262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:03:27.337000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Sep 6 00:03:27.339238 env[2171]: time="2025-09-06T00:03:27.339150656Z" level=info msg="Loading containers: done."
Sep 6 00:03:27.382871 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1192316037-merged.mount: Deactivated successfully.
Sep 6 00:03:27.387333 env[2171]: time="2025-09-06T00:03:27.387144484Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 6 00:03:27.387727 env[2171]: time="2025-09-06T00:03:27.387633294Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Sep 6 00:03:27.388034 env[2171]: time="2025-09-06T00:03:27.387975251Z" level=info msg="Daemon has completed initialization"
Sep 6 00:03:27.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:27.416515 systemd[1]: Started docker.service.
Sep 6 00:03:27.427356 env[2171]: time="2025-09-06T00:03:27.427118277Z" level=info msg="API listen on /run/docker.sock"
Sep 6 00:03:28.686716 env[1848]: time="2025-09-06T00:03:28.686659864Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\""
Sep 6 00:03:29.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:29.395666 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 6 00:03:29.395937 systemd[1]: Stopped kubelet.service.
Sep 6 00:03:29.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:29.400953 systemd[1]: Starting kubelet.service...
Sep 6 00:03:29.429955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount312059180.mount: Deactivated successfully.
Sep 6 00:03:29.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:29.853517 systemd[1]: Started kubelet.service.
Sep 6 00:03:29.967329 kubelet[2300]: E0906 00:03:29.967241 2300 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 6 00:03:29.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Sep 6 00:03:29.976873 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 6 00:03:29.977316 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 6 00:03:31.818569 env[1848]: time="2025-09-06T00:03:31.818508191Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:31.821019 env[1848]: time="2025-09-06T00:03:31.820969447Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:31.824388 env[1848]: time="2025-09-06T00:03:31.824336544Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:31.827849 env[1848]: time="2025-09-06T00:03:31.827775706Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:31.829759 env[1848]: time="2025-09-06T00:03:31.829712357Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\""
Sep 6 00:03:31.832613 env[1848]: time="2025-09-06T00:03:31.832540433Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\""
Sep 6 00:03:34.373397 env[1848]: time="2025-09-06T00:03:34.373328885Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:34.375962 env[1848]: time="2025-09-06T00:03:34.375900595Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:34.379440 env[1848]: time="2025-09-06T00:03:34.379381008Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:34.382907 env[1848]: time="2025-09-06T00:03:34.382844910Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:34.384729 env[1848]: time="2025-09-06T00:03:34.384679188Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\""
Sep 6 00:03:34.385647 env[1848]: time="2025-09-06T00:03:34.385595189Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\""
Sep 6 00:03:36.185296 env[1848]: time="2025-09-06T00:03:36.185239826Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:36.188713 env[1848]: time="2025-09-06T00:03:36.188648996Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:36.194099 env[1848]: time="2025-09-06T00:03:36.194051343Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:36.196939 env[1848]: time="2025-09-06T00:03:36.196893132Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:36.199428 env[1848]: time="2025-09-06T00:03:36.199335548Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\""
Sep 6 00:03:36.200180 env[1848]: time="2025-09-06T00:03:36.200137806Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\""
Sep 6 00:03:37.717866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount995794055.mount: Deactivated successfully.
Sep 6 00:03:38.618722 env[1848]: time="2025-09-06T00:03:38.618663141Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:38.626722 env[1848]: time="2025-09-06T00:03:38.626611003Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:38.630211 env[1848]: time="2025-09-06T00:03:38.630143858Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:38.640157 env[1848]: time="2025-09-06T00:03:38.640077402Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:38.640773 env[1848]: time="2025-09-06T00:03:38.640699359Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\""
Sep 6 00:03:38.642551 env[1848]: time="2025-09-06T00:03:38.642495664Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 6 00:03:39.347458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2351289290.mount: Deactivated successfully.
Sep 6 00:03:40.190331 kernel: kauditd_printk_skb: 88 callbacks suppressed
Sep 6 00:03:40.190484 kernel: audit: type=1130 audit(1757117020.176:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:40.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:40.176456 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 6 00:03:40.176803 systemd[1]: Stopped kubelet.service.
Sep 6 00:03:40.189486 systemd[1]: Starting kubelet.service...
Sep 6 00:03:40.201310 kernel: audit: type=1131 audit(1757117020.176:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:40.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:40.515937 systemd[1]: Started kubelet.service.
Sep 6 00:03:40.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:40.531270 kernel: audit: type=1130 audit(1757117020.515:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:40.634444 kubelet[2314]: E0906 00:03:40.634387 2314 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 6 00:03:40.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Sep 6 00:03:40.638068 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 6 00:03:40.638492 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 6 00:03:40.649211 kernel: audit: type=1131 audit(1757117020.637:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Sep 6 00:03:40.873305 env[1848]: time="2025-09-06T00:03:40.872790733Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:40.878910 env[1848]: time="2025-09-06T00:03:40.878846314Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:40.883463 env[1848]: time="2025-09-06T00:03:40.883398568Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:40.887892 env[1848]: time="2025-09-06T00:03:40.887828103Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 6 00:03:40.888613 env[1848]: time="2025-09-06T00:03:40.888069334Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:40.888707 env[1848]: time="2025-09-06T00:03:40.888667064Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 6 00:03:41.444445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3656594432.mount: Deactivated successfully.
Sep 6 00:03:41.458292 env[1848]: time="2025-09-06T00:03:41.458214505Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:41.462859 env[1848]: time="2025-09-06T00:03:41.462799303Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:41.466810 env[1848]: time="2025-09-06T00:03:41.466741321Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:41.470651 env[1848]: time="2025-09-06T00:03:41.470588908Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:41.471649 env[1848]: time="2025-09-06T00:03:41.471602584Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 6 00:03:41.472503 env[1848]: time="2025-09-06T00:03:41.472461191Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 6 00:03:42.125407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3106421270.mount: Deactivated successfully.
Sep 6 00:03:45.234468 env[1848]: time="2025-09-06T00:03:45.234362409Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:45.246175 env[1848]: time="2025-09-06T00:03:45.246105165Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:45.264573 env[1848]: time="2025-09-06T00:03:45.264484059Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:45.294527 env[1848]: time="2025-09-06T00:03:45.294469156Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:03:45.297140 env[1848]: time="2025-09-06T00:03:45.297065670Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Sep 6 00:03:46.409803 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 6 00:03:46.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:46.422275 kernel: audit: type=1131 audit(1757117026.409:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:46.603489 amazon-ssm-agent[1820]: 2025-09-06 00:03:46 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds.
Sep 6 00:03:50.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:50.676446 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 6 00:03:50.676780 systemd[1]: Stopped kubelet.service.
Sep 6 00:03:50.689099 systemd[1]: Starting kubelet.service...
Sep 6 00:03:50.706994 kernel: audit: type=1130 audit(1757117030.675:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:50.707128 kernel: audit: type=1131 audit(1757117030.675:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:50.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:51.031627 systemd[1]: Started kubelet.service.
Sep 6 00:03:51.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:51.048218 kernel: audit: type=1130 audit(1757117031.030:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:51.139258 kubelet[2351]: E0906 00:03:51.139169 2351 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 6 00:03:51.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Sep 6 00:03:51.142641 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 6 00:03:51.143020 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 6 00:03:51.154410 kernel: audit: type=1131 audit(1757117031.143:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Sep 6 00:03:52.314915 systemd[1]: Stopped kubelet.service.
Sep 6 00:03:52.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:52.320942 systemd[1]: Starting kubelet.service...
Sep 6 00:03:52.337237 kernel: audit: type=1130 audit(1757117032.315:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:52.337371 kernel: audit: type=1131 audit(1757117032.315:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:52.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:03:52.380208 systemd[1]: Reloading.
Sep 6 00:03:52.535648 /usr/lib/systemd/system-generators/torcx-generator[2384]: time="2025-09-06T00:03:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 6 00:03:52.542373 /usr/lib/systemd/system-generators/torcx-generator[2384]: time="2025-09-06T00:03:52Z" level=info msg="torcx already run"
Sep 6 00:03:52.768764 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 6 00:03:52.768802 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 6 00:03:52.807274 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 00:03:53.009626 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 6 00:03:53.009846 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 6 00:03:53.010490 systemd[1]: Stopped kubelet.service.
Sep 6 00:03:53.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Sep 6 00:03:53.014928 systemd[1]: Starting kubelet.service...
Sep 6 00:03:53.024246 kernel: audit: type=1130 audit(1757117033.009:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 6 00:03:53.331841 systemd[1]: Started kubelet.service. Sep 6 00:03:53.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:53.344329 kernel: audit: type=1130 audit(1757117033.331:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:03:53.430379 kubelet[2459]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:03:53.430379 kubelet[2459]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 6 00:03:53.430379 kubelet[2459]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 6 00:03:53.431046 kubelet[2459]: I0906 00:03:53.430486 2459 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:03:54.699118 kubelet[2459]: I0906 00:03:54.699072 2459 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 00:03:54.699777 kubelet[2459]: I0906 00:03:54.699752 2459 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:03:54.700352 kubelet[2459]: I0906 00:03:54.700324 2459 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 00:03:54.746443 kubelet[2459]: E0906 00:03:54.746392 2459 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.24.61:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.24.61:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:03:54.748667 kubelet[2459]: I0906 00:03:54.748623 2459 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:03:54.761169 kubelet[2459]: E0906 00:03:54.761124 2459 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:03:54.761509 kubelet[2459]: I0906 00:03:54.761486 2459 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 00:03:54.769305 kubelet[2459]: I0906 00:03:54.769259 2459 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 6 00:03:54.771851 kubelet[2459]: I0906 00:03:54.771819 2459 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 00:03:54.772370 kubelet[2459]: I0906 00:03:54.772323 2459 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:03:54.772765 kubelet[2459]: I0906 00:03:54.772478 2459 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-61","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPol
icyOptions":null,"CgroupVersion":1} Sep 6 00:03:54.773269 kubelet[2459]: I0906 00:03:54.773246 2459 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:03:54.773377 kubelet[2459]: I0906 00:03:54.773357 2459 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 00:03:54.773900 kubelet[2459]: I0906 00:03:54.773879 2459 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:03:54.780725 kubelet[2459]: W0906 00:03:54.780630 2459 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.24.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-61&limit=500&resourceVersion=0": dial tcp 172.31.24.61:6443: connect: connection refused Sep 6 00:03:54.780977 kubelet[2459]: E0906 00:03:54.780943 2459 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.24.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-61&limit=500&resourceVersion=0\": dial tcp 172.31.24.61:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:03:54.781466 kubelet[2459]: I0906 00:03:54.781432 2459 kubelet.go:408] "Attempting to sync node with API server" Sep 6 00:03:54.781582 kubelet[2459]: I0906 00:03:54.781476 2459 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:03:54.781582 kubelet[2459]: I0906 00:03:54.781513 2459 kubelet.go:314] "Adding apiserver pod source" Sep 6 00:03:54.781710 kubelet[2459]: I0906 00:03:54.781673 2459 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:03:54.788645 kubelet[2459]: W0906 00:03:54.788531 2459 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.24.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.24.61:6443: connect: connection refused Sep 6 
00:03:54.788821 kubelet[2459]: E0906 00:03:54.788660 2459 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.24.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.61:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:03:54.789150 kubelet[2459]: I0906 00:03:54.789110 2459 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 00:03:54.790449 kubelet[2459]: I0906 00:03:54.790410 2459 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 00:03:54.790800 kubelet[2459]: W0906 00:03:54.790768 2459 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 6 00:03:54.793367 kubelet[2459]: I0906 00:03:54.793169 2459 server.go:1274] "Started kubelet" Sep 6 00:03:54.802550 kubelet[2459]: E0906 00:03:54.800371 2459 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.61:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.61:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-61.186288a22fa1ea45 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-61,UID:ip-172-31-24-61,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-61,},FirstTimestamp:2025-09-06 00:03:54.793134661 +0000 UTC m=+1.448636856,LastTimestamp:2025-09-06 00:03:54.793134661 +0000 UTC m=+1.448636856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-61,}" Sep 6 00:03:54.804436 kubelet[2459]: I0906 
00:03:54.804350 2459 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:03:54.805165 kubelet[2459]: I0906 00:03:54.805134 2459 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:03:54.804000 audit[2459]: AVC avc: denied { mac_admin } for pid=2459 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:03:54.805634 kubelet[2459]: I0906 00:03:54.805595 2459 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:03:54.810321 kubelet[2459]: I0906 00:03:54.810274 2459 server.go:449] "Adding debug handlers to kubelet server" Sep 6 00:03:54.813338 kubelet[2459]: I0906 00:03:54.813263 2459 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Sep 6 00:03:54.813496 kubelet[2459]: I0906 00:03:54.813350 2459 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Sep 6 00:03:54.813561 kubelet[2459]: I0906 00:03:54.813509 2459 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:03:54.804000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 00:03:54.817564 kubelet[2459]: I0906 00:03:54.817525 2459 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:03:54.819368 kernel: audit: type=1400 audit(1757117034.804:218): avc: denied { mac_admin } for pid=2459 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:03:54.819457 kernel: audit: type=1401 audit(1757117034.804:218): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 00:03:54.804000 audit[2459]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000bb0b10 a1=40009171e8 a2=4000bb0ae0 a3=25 items=0 ppid=1 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:03:54.824303 kubelet[2459]: I0906 00:03:54.824275 2459 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 6 00:03:54.824691 kubelet[2459]: I0906 00:03:54.824667 2459 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 6 00:03:54.824924 kubelet[2459]: I0906 00:03:54.824905 2459 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:03:54.825811 kubelet[2459]: W0906 00:03:54.825747 2459 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.24.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.61:6443: connect: connection refused Sep 6 00:03:54.826006 kubelet[2459]: E0906 00:03:54.825975 2459 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.24.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.61:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:03:54.826861 kubelet[2459]: E0906 00:03:54.826828 2459 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-24-61\" not found" Sep 6 00:03:54.827125 kubelet[2459]: E0906 00:03:54.827081 2459 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.24.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-61?timeout=10s\": dial tcp 172.31.24.61:6443: connect: connection refused" interval="200ms" Sep 6 00:03:54.832626 kernel: audit: type=1300 audit(1757117034.804:218): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000bb0b10 a1=40009171e8 a2=4000bb0ae0 a3=25 items=0 ppid=1 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:03:54.804000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 00:03:54.835263 kubelet[2459]: I0906 00:03:54.835228 2459 factory.go:221] Registration of the systemd container factory successfully Sep 6 00:03:54.835574 kubelet[2459]: I0906 00:03:54.835543 2459 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:03:54.840389 kubelet[2459]: E0906 00:03:54.840354 2459 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:03:54.840896 kubelet[2459]: I0906 00:03:54.840873 2459 factory.go:221] Registration of the containerd container factory successfully Sep 6 00:03:54.844685 kernel: audit: type=1327 audit(1757117034.804:218): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 00:03:54.844826 kernel: audit: type=1400 audit(1757117034.812:219): avc: denied { mac_admin } for pid=2459 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:03:54.812000 audit[2459]: AVC avc: denied { mac_admin } for pid=2459 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:03:54.812000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 00:03:54.859329 kernel: audit: type=1401 audit(1757117034.812:219): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 00:03:54.812000 audit[2459]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=400094cb80 a1=4000917200 a2=4000bb0ba0 a3=25 items=0 ppid=1 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:03:54.812000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 00:03:54.844000 audit[2471]: NETFILTER_CFG 
table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2471 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:03:54.844000 audit[2471]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc076a850 a2=0 a3=1 items=0 ppid=2459 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:03:54.844000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Sep 6 00:03:54.869000 audit[2473]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2473 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:03:54.869000 audit[2473]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcd37fb20 a2=0 a3=1 items=0 ppid=2459 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:03:54.869000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Sep 6 00:03:54.873000 audit[2476]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2476 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:03:54.873000 audit[2476]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe5e43d70 a2=0 a3=1 items=0 ppid=2459 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:03:54.873000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 6 00:03:54.877000 audit[2478]: 
NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2478 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:03:54.877000 audit[2478]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd330f190 a2=0 a3=1 items=0 ppid=2459 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:03:54.877000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 6 00:03:54.893435 kubelet[2459]: I0906 00:03:54.893383 2459 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 00:03:54.893435 kubelet[2459]: I0906 00:03:54.893429 2459 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 00:03:54.893662 kubelet[2459]: I0906 00:03:54.893464 2459 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:03:54.896493 kubelet[2459]: I0906 00:03:54.896438 2459 policy_none.go:49] "None policy: Start" Sep 6 00:03:54.898697 kubelet[2459]: I0906 00:03:54.898613 2459 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 00:03:54.898697 kubelet[2459]: I0906 00:03:54.898698 2459 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:03:54.912800 kubelet[2459]: I0906 00:03:54.912758 2459 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 00:03:54.911000 audit[2459]: AVC avc: denied { mac_admin } for pid=2459 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:03:54.911000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 00:03:54.911000 audit[2459]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000ddd050 a1=4000dad2a8 a2=4000ddd020 
a3=25 items=0 ppid=1 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:03:54.911000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 00:03:54.913493 kubelet[2459]: I0906 00:03:54.913051 2459 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Sep 6 00:03:54.913774 kubelet[2459]: I0906 00:03:54.913753 2459 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:03:54.913936 kubelet[2459]: I0906 00:03:54.913886 2459 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:03:54.917696 kubelet[2459]: I0906 00:03:54.917663 2459 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:03:54.920446 kubelet[2459]: E0906 00:03:54.920410 2459 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-61\" not found" Sep 6 00:03:54.931000 audit[2482]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2482 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:03:54.931000 audit[2482]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffd99f9380 a2=0 a3=1 items=0 ppid=2459 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:03:54.931000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Sep 6 00:03:54.933063 kubelet[2459]: I0906 00:03:54.933011 2459 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 00:03:54.933000 audit[2483]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=2483 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:03:54.933000 audit[2483]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd728f6f0 a2=0 a3=1 items=0 ppid=2459 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:03:54.933000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Sep 6 00:03:54.933000 audit[2484]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=2484 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:03:54.933000 audit[2484]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd88e3eb0 a2=0 a3=1 items=0 ppid=2459 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:03:54.933000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Sep 6 00:03:54.935964 kubelet[2459]: I0906 00:03:54.935918 2459 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 6 00:03:54.936068 kubelet[2459]: I0906 00:03:54.935967 2459 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 00:03:54.936068 kubelet[2459]: I0906 00:03:54.936001 2459 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 00:03:54.936226 kubelet[2459]: E0906 00:03:54.936068 2459 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Sep 6 00:03:54.936000 audit[2486]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=2486 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:03:54.936000 audit[2486]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffe86f290 a2=0 a3=1 items=0 ppid=2459 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:03:54.936000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Sep 6 00:03:54.936000 audit[2485]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=2485 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:03:54.936000 audit[2485]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdcc936e0 a2=0 a3=1 items=0 ppid=2459 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:03:54.936000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Sep 6 00:03:54.938000 audit[2488]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=2488 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:03:54.938000 audit[2488]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=128 a0=3 a1=ffffc1494dc0 a2=0 a3=1 items=0 ppid=2459 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:03:54.938000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Sep 6 00:03:54.939000 audit[2487]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_chain pid=2487 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:03:54.939000 audit[2487]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd88f6030 a2=0 a3=1 items=0 ppid=2459 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:03:54.939000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Sep 6 00:03:54.941000 audit[2489]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=2489 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:03:54.943439 kubelet[2459]: W0906 00:03:54.943157 2459 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.24.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.61:6443: connect: connection refused Sep 6 00:03:54.943439 kubelet[2459]: E0906 00:03:54.943297 2459 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.24.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.61:6443: connect: connection refused" logger="UnhandledError" Sep 6 
00:03:54.941000 audit[2489]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe9a20150 a2=0 a3=1 items=0 ppid=2459 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:03:54.941000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Sep 6 00:03:55.016420 kubelet[2459]: I0906 00:03:55.016241 2459 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-24-61" Sep 6 00:03:55.019383 kubelet[2459]: E0906 00:03:55.019339 2459 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.24.61:6443/api/v1/nodes\": dial tcp 172.31.24.61:6443: connect: connection refused" node="ip-172-31-24-61" Sep 6 00:03:55.028353 kubelet[2459]: E0906 00:03:55.028306 2459 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-61?timeout=10s\": dial tcp 172.31.24.61:6443: connect: connection refused" interval="400ms" Sep 6 00:03:55.127459 kubelet[2459]: I0906 00:03:55.127408 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/41d23ad7427e6a77d1ef2009d030fae9-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-61\" (UID: \"41d23ad7427e6a77d1ef2009d030fae9\") " pod="kube-system/kube-scheduler-ip-172-31-24-61" Sep 6 00:03:55.127744 kubelet[2459]: I0906 00:03:55.127716 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0a4a74544d21041bcf9424798f64b598-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-61\" (UID: \"0a4a74544d21041bcf9424798f64b598\") " 
pod="kube-system/kube-controller-manager-ip-172-31-24-61" Sep 6 00:03:55.127963 kubelet[2459]: I0906 00:03:55.127934 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a4a74544d21041bcf9424798f64b598-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-61\" (UID: \"0a4a74544d21041bcf9424798f64b598\") " pod="kube-system/kube-controller-manager-ip-172-31-24-61" Sep 6 00:03:55.128144 kubelet[2459]: I0906 00:03:55.128120 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a4a74544d21041bcf9424798f64b598-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-61\" (UID: \"0a4a74544d21041bcf9424798f64b598\") " pod="kube-system/kube-controller-manager-ip-172-31-24-61" Sep 6 00:03:55.128366 kubelet[2459]: I0906 00:03:55.128342 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a4a74544d21041bcf9424798f64b598-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-61\" (UID: \"0a4a74544d21041bcf9424798f64b598\") " pod="kube-system/kube-controller-manager-ip-172-31-24-61" Sep 6 00:03:55.128545 kubelet[2459]: I0906 00:03:55.128518 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a4a74544d21041bcf9424798f64b598-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-61\" (UID: \"0a4a74544d21041bcf9424798f64b598\") " pod="kube-system/kube-controller-manager-ip-172-31-24-61" Sep 6 00:03:55.128733 kubelet[2459]: I0906 00:03:55.128709 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a115c38f311e38bb8178c6a92458e9ce-ca-certs\") pod 
\"kube-apiserver-ip-172-31-24-61\" (UID: \"a115c38f311e38bb8178c6a92458e9ce\") " pod="kube-system/kube-apiserver-ip-172-31-24-61" Sep 6 00:03:55.128924 kubelet[2459]: I0906 00:03:55.128901 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a115c38f311e38bb8178c6a92458e9ce-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-61\" (UID: \"a115c38f311e38bb8178c6a92458e9ce\") " pod="kube-system/kube-apiserver-ip-172-31-24-61" Sep 6 00:03:55.129104 kubelet[2459]: I0906 00:03:55.129080 2459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a115c38f311e38bb8178c6a92458e9ce-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-61\" (UID: \"a115c38f311e38bb8178c6a92458e9ce\") " pod="kube-system/kube-apiserver-ip-172-31-24-61" Sep 6 00:03:55.222242 kubelet[2459]: I0906 00:03:55.222207 2459 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-24-61" Sep 6 00:03:55.222927 kubelet[2459]: E0906 00:03:55.222882 2459 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.24.61:6443/api/v1/nodes\": dial tcp 172.31.24.61:6443: connect: connection refused" node="ip-172-31-24-61" Sep 6 00:03:55.350137 env[1848]: time="2025-09-06T00:03:55.349425093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-61,Uid:41d23ad7427e6a77d1ef2009d030fae9,Namespace:kube-system,Attempt:0,}" Sep 6 00:03:55.353881 env[1848]: time="2025-09-06T00:03:55.353489553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-61,Uid:a115c38f311e38bb8178c6a92458e9ce,Namespace:kube-system,Attempt:0,}" Sep 6 00:03:55.358347 env[1848]: time="2025-09-06T00:03:55.358281473Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-61,Uid:0a4a74544d21041bcf9424798f64b598,Namespace:kube-system,Attempt:0,}" Sep 6 00:03:55.429415 kubelet[2459]: E0906 00:03:55.429350 2459 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-61?timeout=10s\": dial tcp 172.31.24.61:6443: connect: connection refused" interval="800ms" Sep 6 00:03:55.625779 kubelet[2459]: I0906 00:03:55.625106 2459 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-24-61" Sep 6 00:03:55.630991 kubelet[2459]: E0906 00:03:55.630893 2459 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.24.61:6443/api/v1/nodes\": dial tcp 172.31.24.61:6443: connect: connection refused" node="ip-172-31-24-61" Sep 6 00:03:55.634720 kubelet[2459]: W0906 00:03:55.634623 2459 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.24.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-61&limit=500&resourceVersion=0": dial tcp 172.31.24.61:6443: connect: connection refused Sep 6 00:03:55.634970 kubelet[2459]: E0906 00:03:55.634934 2459 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.24.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-61&limit=500&resourceVersion=0\": dial tcp 172.31.24.61:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:03:55.846011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1444790988.mount: Deactivated successfully. 
Sep 6 00:03:55.856974 env[1848]: time="2025-09-06T00:03:55.856818410Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:55.860411 env[1848]: time="2025-09-06T00:03:55.860347118Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:55.868585 env[1848]: time="2025-09-06T00:03:55.868509310Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:55.872043 env[1848]: time="2025-09-06T00:03:55.871978017Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:55.873472 env[1848]: time="2025-09-06T00:03:55.873409704Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:55.877254 env[1848]: time="2025-09-06T00:03:55.874917696Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:55.880264 env[1848]: time="2025-09-06T00:03:55.879125643Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:55.885287 env[1848]: time="2025-09-06T00:03:55.885229609Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 
00:03:55.887648 env[1848]: time="2025-09-06T00:03:55.887590562Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:55.890341 env[1848]: time="2025-09-06T00:03:55.890294323Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:55.892099 env[1848]: time="2025-09-06T00:03:55.892033419Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:55.893971 env[1848]: time="2025-09-06T00:03:55.893927770Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:03:55.949934 env[1848]: time="2025-09-06T00:03:55.949832440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:03:55.950171 env[1848]: time="2025-09-06T00:03:55.950124661Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:03:55.950377 env[1848]: time="2025-09-06T00:03:55.950320620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:03:55.951001 env[1848]: time="2025-09-06T00:03:55.950914439Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/52201221e1726ac5531235d54c861519849fe171121cbdb702a9ad8a344a3498 pid=2506 runtime=io.containerd.runc.v2 Sep 6 00:03:55.954984 env[1848]: time="2025-09-06T00:03:55.954863245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:03:55.955298 env[1848]: time="2025-09-06T00:03:55.955233345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:03:55.955468 env[1848]: time="2025-09-06T00:03:55.955422460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:03:55.966952 env[1848]: time="2025-09-06T00:03:55.966801338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:03:55.967415 env[1848]: time="2025-09-06T00:03:55.967336718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:03:55.967600 env[1848]: time="2025-09-06T00:03:55.967542955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:03:55.968094 env[1848]: time="2025-09-06T00:03:55.968021865Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6bd0ef6a1977ed847d15b7fab33bdeefbc01e877bdc8cde6ac0a2b043ded691 pid=2530 runtime=io.containerd.runc.v2 Sep 6 00:03:55.970367 env[1848]: time="2025-09-06T00:03:55.968895069Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f10875231056233c597eeb1e4d50be62d9fbf4f78d484d76b2067b8a5e1d4e3 pid=2498 runtime=io.containerd.runc.v2 Sep 6 00:03:56.184430 env[1848]: time="2025-09-06T00:03:56.184365881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-61,Uid:0a4a74544d21041bcf9424798f64b598,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6bd0ef6a1977ed847d15b7fab33bdeefbc01e877bdc8cde6ac0a2b043ded691\"" Sep 6 00:03:56.190599 env[1848]: time="2025-09-06T00:03:56.190528740Z" level=info msg="CreateContainer within sandbox \"d6bd0ef6a1977ed847d15b7fab33bdeefbc01e877bdc8cde6ac0a2b043ded691\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 6 00:03:56.195820 env[1848]: time="2025-09-06T00:03:56.195741290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-61,Uid:a115c38f311e38bb8178c6a92458e9ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"52201221e1726ac5531235d54c861519849fe171121cbdb702a9ad8a344a3498\"" Sep 6 00:03:56.208112 env[1848]: time="2025-09-06T00:03:56.208033425Z" level=info msg="CreateContainer within sandbox \"52201221e1726ac5531235d54c861519849fe171121cbdb702a9ad8a344a3498\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 6 00:03:56.219697 env[1848]: time="2025-09-06T00:03:56.219603478Z" level=info msg="CreateContainer within sandbox \"d6bd0ef6a1977ed847d15b7fab33bdeefbc01e877bdc8cde6ac0a2b043ded691\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"66e244cc7ea3199b67bc9659a4fa4bd1422c4f6cb6290fb7617f0c3d7f3ab8b6\"" Sep 6 00:03:56.221706 kubelet[2459]: E0906 00:03:56.221474 2459 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.61:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.61:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-61.186288a22fa1ea45 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-61,UID:ip-172-31-24-61,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-61,},FirstTimestamp:2025-09-06 00:03:54.793134661 +0000 UTC m=+1.448636856,LastTimestamp:2025-09-06 00:03:54.793134661 +0000 UTC m=+1.448636856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-61,}" Sep 6 00:03:56.224484 env[1848]: time="2025-09-06T00:03:56.224413425Z" level=info msg="StartContainer for \"66e244cc7ea3199b67bc9659a4fa4bd1422c4f6cb6290fb7617f0c3d7f3ab8b6\"" Sep 6 00:03:56.233071 kubelet[2459]: E0906 00:03:56.230804 2459 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-61?timeout=10s\": dial tcp 172.31.24.61:6443: connect: connection refused" interval="1.6s" Sep 6 00:03:56.236865 env[1848]: time="2025-09-06T00:03:56.236785478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-61,Uid:41d23ad7427e6a77d1ef2009d030fae9,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f10875231056233c597eeb1e4d50be62d9fbf4f78d484d76b2067b8a5e1d4e3\"" Sep 6 00:03:56.241131 env[1848]: time="2025-09-06T00:03:56.241046914Z" level=info msg="CreateContainer 
within sandbox \"52201221e1726ac5531235d54c861519849fe171121cbdb702a9ad8a344a3498\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"67dac125e9838ac31614c3b4c3dff14b0b22e49cc36d893259dd7c1f4a8d2c69\"" Sep 6 00:03:56.242731 env[1848]: time="2025-09-06T00:03:56.241998076Z" level=info msg="StartContainer for \"67dac125e9838ac31614c3b4c3dff14b0b22e49cc36d893259dd7c1f4a8d2c69\"" Sep 6 00:03:56.243297 env[1848]: time="2025-09-06T00:03:56.243237403Z" level=info msg="CreateContainer within sandbox \"4f10875231056233c597eeb1e4d50be62d9fbf4f78d484d76b2067b8a5e1d4e3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 6 00:03:56.264542 env[1848]: time="2025-09-06T00:03:56.264458808Z" level=info msg="CreateContainer within sandbox \"4f10875231056233c597eeb1e4d50be62d9fbf4f78d484d76b2067b8a5e1d4e3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"70f344c0e2f97fa9ea58cd65f13ec6d239a669a0f92e8e44243fe97a3888c050\"" Sep 6 00:03:56.265730 env[1848]: time="2025-09-06T00:03:56.265666845Z" level=info msg="StartContainer for \"70f344c0e2f97fa9ea58cd65f13ec6d239a669a0f92e8e44243fe97a3888c050\"" Sep 6 00:03:56.277641 kubelet[2459]: W0906 00:03:56.277408 2459 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.24.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.24.61:6443: connect: connection refused Sep 6 00:03:56.277641 kubelet[2459]: E0906 00:03:56.277551 2459 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.24.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.61:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:03:56.312263 kubelet[2459]: W0906 00:03:56.309447 2459 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.24.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.61:6443: connect: connection refused Sep 6 00:03:56.312263 kubelet[2459]: E0906 00:03:56.309571 2459 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.24.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.61:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:03:56.429671 kubelet[2459]: W0906 00:03:56.428318 2459 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.24.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.61:6443: connect: connection refused Sep 6 00:03:56.429671 kubelet[2459]: E0906 00:03:56.429483 2459 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.24.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.61:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:03:56.438086 kubelet[2459]: I0906 00:03:56.436965 2459 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-24-61" Sep 6 00:03:56.438086 kubelet[2459]: E0906 00:03:56.437885 2459 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.24.61:6443/api/v1/nodes\": dial tcp 172.31.24.61:6443: connect: connection refused" node="ip-172-31-24-61" Sep 6 00:03:56.508599 env[1848]: time="2025-09-06T00:03:56.508535524Z" level=info msg="StartContainer for \"67dac125e9838ac31614c3b4c3dff14b0b22e49cc36d893259dd7c1f4a8d2c69\" returns successfully" Sep 6 00:03:56.510406 env[1848]: time="2025-09-06T00:03:56.510352034Z" 
level=info msg="StartContainer for \"66e244cc7ea3199b67bc9659a4fa4bd1422c4f6cb6290fb7617f0c3d7f3ab8b6\" returns successfully" Sep 6 00:03:56.515250 env[1848]: time="2025-09-06T00:03:56.511466525Z" level=info msg="StartContainer for \"70f344c0e2f97fa9ea58cd65f13ec6d239a669a0f92e8e44243fe97a3888c050\" returns successfully" Sep 6 00:03:58.040215 kubelet[2459]: I0906 00:03:58.040162 2459 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-24-61" Sep 6 00:04:00.569696 kubelet[2459]: E0906 00:04:00.569635 2459 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-24-61\" not found" node="ip-172-31-24-61" Sep 6 00:04:00.639807 kubelet[2459]: I0906 00:04:00.639761 2459 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-24-61" Sep 6 00:04:00.789260 kubelet[2459]: I0906 00:04:00.789206 2459 apiserver.go:52] "Watching apiserver" Sep 6 00:04:00.825999 kubelet[2459]: I0906 00:04:00.825830 2459 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 00:04:01.421309 update_engine[1838]: I0906 00:04:01.421247 1838 update_attempter.cc:509] Updating boot flags... Sep 6 00:04:02.847219 systemd[1]: Reloading. Sep 6 00:04:02.973130 /usr/lib/systemd/system-generators/torcx-generator[2856]: time="2025-09-06T00:04:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:04:02.974274 /usr/lib/systemd/system-generators/torcx-generator[2856]: time="2025-09-06T00:04:02Z" level=info msg="torcx already run" Sep 6 00:04:03.165825 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Sep 6 00:04:03.166363 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:04:03.209014 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:04:03.438869 systemd[1]: Stopping kubelet.service... Sep 6 00:04:03.464973 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:04:03.465863 systemd[1]: Stopped kubelet.service. Sep 6 00:04:03.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:04:03.468216 kernel: kauditd_printk_skb: 42 callbacks suppressed Sep 6 00:04:03.468302 kernel: audit: type=1131 audit(1757117043.464:233): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:04:03.470400 systemd[1]: Starting kubelet.service... Sep 6 00:04:03.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:04:03.846506 systemd[1]: Started kubelet.service. Sep 6 00:04:03.866255 kernel: audit: type=1130 audit(1757117043.846:234): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:04:03.982548 kubelet[2926]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:04:03.983090 kubelet[2926]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 6 00:04:03.983290 kubelet[2926]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:04:03.983663 kubelet[2926]: I0906 00:04:03.983586 2926 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:04:03.996821 kubelet[2926]: I0906 00:04:03.996748 2926 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 00:04:03.996821 kubelet[2926]: I0906 00:04:03.996801 2926 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:04:03.997896 kubelet[2926]: I0906 00:04:03.997665 2926 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 00:04:04.000597 kubelet[2926]: I0906 00:04:04.000100 2926 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 6 00:04:04.004897 kubelet[2926]: I0906 00:04:04.003334 2926 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:04:04.024817 kubelet[2926]: E0906 00:04:04.024619 2926 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:04:04.025248 kubelet[2926]: I0906 00:04:04.025219 2926 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Sep 6 00:04:04.031426 kubelet[2926]: I0906 00:04:04.031370 2926 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 6 00:04:04.033642 kubelet[2926]: I0906 00:04:04.032175 2926 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 00:04:04.033642 kubelet[2926]: I0906 00:04:04.032518 2926 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:04:04.033642 kubelet[2926]: I0906 00:04:04.032556 2926 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-61","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"Experime
ntalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 6 00:04:04.033642 kubelet[2926]: I0906 00:04:04.032831 2926 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:04:04.052113 kubelet[2926]: I0906 00:04:04.032851 2926 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 00:04:04.052113 kubelet[2926]: I0906 00:04:04.032913 2926 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:04:04.052113 kubelet[2926]: I0906 00:04:04.033084 2926 kubelet.go:408] "Attempting to sync node with API server" Sep 6 00:04:04.052113 kubelet[2926]: I0906 00:04:04.033107 2926 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:04:04.052113 kubelet[2926]: I0906 00:04:04.034478 2926 kubelet.go:314] "Adding apiserver pod source" Sep 6 00:04:04.052113 kubelet[2926]: I0906 00:04:04.040219 2926 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:04:04.054738 kubelet[2926]: I0906 00:04:04.054674 2926 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 00:04:04.055664 kubelet[2926]: I0906 00:04:04.055442 2926 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 00:04:04.058871 kubelet[2926]: I0906 00:04:04.056234 2926 server.go:1274] "Started kubelet" Sep 6 00:04:04.074722 kubelet[2926]: E0906 00:04:04.070624 2926 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:04:04.074722 kubelet[2926]: I0906 00:04:04.070780 2926 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:04:04.074722 kubelet[2926]: I0906 00:04:04.071319 2926 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:04:04.074722 kubelet[2926]: I0906 00:04:04.071416 2926 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:04:04.074722 kubelet[2926]: I0906 00:04:04.072952 2926 server.go:449] "Adding debug handlers to kubelet server" Sep 6 00:04:04.079558 kubelet[2926]: I0906 00:04:04.079513 2926 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Sep 6 00:04:04.079789 kubelet[2926]: I0906 00:04:04.079752 2926 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Sep 6 00:04:04.080914 kubelet[2926]: I0906 00:04:04.080884 2926 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:04:04.079000 audit[2926]: AVC avc: denied { mac_admin } for pid=2926 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:04.086556 kubelet[2926]: I0906 00:04:04.086510 2926 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:04:04.079000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 00:04:04.104447 kernel: audit: type=1400 audit(1757117044.079:235): 
avc: denied { mac_admin } for pid=2926 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:04.106249 kernel: audit: type=1401 audit(1757117044.079:235): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 00:04:04.106306 kernel: audit: type=1300 audit(1757117044.079:235): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000a10d80 a1=40005c0b70 a2=4000a10d50 a3=25 items=0 ppid=1 pid=2926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:04.079000 audit[2926]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000a10d80 a1=40005c0b70 a2=4000a10d50 a3=25 items=0 ppid=1 pid=2926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:04.107958 kubelet[2926]: I0906 00:04:04.107926 2926 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 6 00:04:04.109920 kubelet[2926]: I0906 00:04:04.108782 2926 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 6 00:04:04.110355 kubelet[2926]: I0906 00:04:04.110332 2926 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:04:04.079000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 00:04:04.079000 audit[2926]: AVC avc: denied { mac_admin } for pid=2926 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:04.143350 kernel: audit: type=1327 
audit(1757117044.079:235): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 00:04:04.143496 kernel: audit: type=1400 audit(1757117044.079:236): avc: denied { mac_admin } for pid=2926 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:04.144039 kubelet[2926]: I0906 00:04:04.143960 2926 factory.go:221] Registration of the systemd container factory successfully Sep 6 00:04:04.145981 kubelet[2926]: I0906 00:04:04.145931 2926 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:04:04.079000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 00:04:04.154097 kernel: audit: type=1401 audit(1757117044.079:236): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 00:04:04.157888 kernel: audit: type=1300 audit(1757117044.079:236): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000777160 a1=40005c0b88 a2=4000a10e10 a3=25 items=0 ppid=1 pid=2926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:04.079000 audit[2926]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000777160 a1=40005c0b88 a2=4000a10e10 a3=25 items=0 ppid=1 pid=2926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:04.164972 kubelet[2926]: I0906 00:04:04.164941 2926 factory.go:221] Registration of 
the containerd container factory successfully Sep 6 00:04:04.079000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 00:04:04.186222 kernel: audit: type=1327 audit(1757117044.079:236): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 00:04:04.229607 kubelet[2926]: I0906 00:04:04.229494 2926 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 00:04:04.238345 kubelet[2926]: I0906 00:04:04.238289 2926 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 6 00:04:04.238345 kubelet[2926]: I0906 00:04:04.238340 2926 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 00:04:04.238576 kubelet[2926]: I0906 00:04:04.238374 2926 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 00:04:04.238576 kubelet[2926]: E0906 00:04:04.238448 2926 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:04:04.339056 kubelet[2926]: E0906 00:04:04.339017 2926 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 6 00:04:04.366497 kubelet[2926]: I0906 00:04:04.366329 2926 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 00:04:04.367021 kubelet[2926]: I0906 00:04:04.366710 2926 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 00:04:04.367733 kubelet[2926]: I0906 00:04:04.367709 2926 state_mem.go:36] "Initialized new in-memory state store" Sep 6 
00:04:04.368572 kubelet[2926]: I0906 00:04:04.368541 2926 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 6 00:04:04.368770 kubelet[2926]: I0906 00:04:04.368730 2926 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 6 00:04:04.368899 kubelet[2926]: I0906 00:04:04.368880 2926 policy_none.go:49] "None policy: Start" Sep 6 00:04:04.370886 kubelet[2926]: I0906 00:04:04.370858 2926 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 00:04:04.371246 kubelet[2926]: I0906 00:04:04.371202 2926 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:04:04.371670 kubelet[2926]: I0906 00:04:04.371646 2926 state_mem.go:75] "Updated machine memory state" Sep 6 00:04:04.375875 kubelet[2926]: I0906 00:04:04.375842 2926 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 00:04:04.376227 kubelet[2926]: I0906 00:04:04.376159 2926 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Sep 6 00:04:04.376672 kubelet[2926]: I0906 00:04:04.376640 2926 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:04:04.376852 kubelet[2926]: I0906 00:04:04.376789 2926 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:04:04.374000 audit[2926]: AVC avc: denied { mac_admin } for pid=2926 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:04.374000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 00:04:04.374000 audit[2926]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40011b0240 a1=4000f8f758 a2=40011b0210 a3=25 items=0 ppid=1 pid=2926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:04.374000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 00:04:04.379573 kubelet[2926]: I0906 00:04:04.378437 2926 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:04:04.495832 kubelet[2926]: I0906 00:04:04.495794 2926 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-24-61" Sep 6 00:04:04.505164 kubelet[2926]: I0906 00:04:04.505055 2926 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-24-61" Sep 6 00:04:04.505164 kubelet[2926]: I0906 00:04:04.505209 2926 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-24-61" Sep 6 
00:04:04.615273 kubelet[2926]: I0906 00:04:04.615170 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0a4a74544d21041bcf9424798f64b598-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-61\" (UID: \"0a4a74544d21041bcf9424798f64b598\") " pod="kube-system/kube-controller-manager-ip-172-31-24-61" Sep 6 00:04:04.615431 kubelet[2926]: I0906 00:04:04.615387 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a4a74544d21041bcf9424798f64b598-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-61\" (UID: \"0a4a74544d21041bcf9424798f64b598\") " pod="kube-system/kube-controller-manager-ip-172-31-24-61" Sep 6 00:04:04.615531 kubelet[2926]: I0906 00:04:04.615483 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/41d23ad7427e6a77d1ef2009d030fae9-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-61\" (UID: \"41d23ad7427e6a77d1ef2009d030fae9\") " pod="kube-system/kube-scheduler-ip-172-31-24-61" Sep 6 00:04:04.615596 kubelet[2926]: I0906 00:04:04.615575 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a115c38f311e38bb8178c6a92458e9ce-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-61\" (UID: \"a115c38f311e38bb8178c6a92458e9ce\") " pod="kube-system/kube-apiserver-ip-172-31-24-61" Sep 6 00:04:04.615696 kubelet[2926]: I0906 00:04:04.615659 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a4a74544d21041bcf9424798f64b598-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-61\" (UID: 
\"0a4a74544d21041bcf9424798f64b598\") " pod="kube-system/kube-controller-manager-ip-172-31-24-61" Sep 6 00:04:04.615770 kubelet[2926]: I0906 00:04:04.615754 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a4a74544d21041bcf9424798f64b598-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-61\" (UID: \"0a4a74544d21041bcf9424798f64b598\") " pod="kube-system/kube-controller-manager-ip-172-31-24-61" Sep 6 00:04:04.615896 kubelet[2926]: I0906 00:04:04.615836 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a4a74544d21041bcf9424798f64b598-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-61\" (UID: \"0a4a74544d21041bcf9424798f64b598\") " pod="kube-system/kube-controller-manager-ip-172-31-24-61" Sep 6 00:04:04.615981 kubelet[2926]: I0906 00:04:04.615959 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a115c38f311e38bb8178c6a92458e9ce-ca-certs\") pod \"kube-apiserver-ip-172-31-24-61\" (UID: \"a115c38f311e38bb8178c6a92458e9ce\") " pod="kube-system/kube-apiserver-ip-172-31-24-61" Sep 6 00:04:04.616142 kubelet[2926]: I0906 00:04:04.616058 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a115c38f311e38bb8178c6a92458e9ce-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-61\" (UID: \"a115c38f311e38bb8178c6a92458e9ce\") " pod="kube-system/kube-apiserver-ip-172-31-24-61" Sep 6 00:04:05.041615 kubelet[2926]: I0906 00:04:05.041570 2926 apiserver.go:52] "Watching apiserver" Sep 6 00:04:05.110706 kubelet[2926]: I0906 00:04:05.110615 2926 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 00:04:05.315628 kubelet[2926]: 
E0906 00:04:05.315467 2926 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-24-61\" already exists" pod="kube-system/kube-apiserver-ip-172-31-24-61" Sep 6 00:04:05.355815 kubelet[2926]: I0906 00:04:05.355677 2926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-24-61" podStartSLOduration=1.3556543300000001 podStartE2EDuration="1.35565433s" podCreationTimestamp="2025-09-06 00:04:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:04:05.341064413 +0000 UTC m=+1.478540217" watchObservedRunningTime="2025-09-06 00:04:05.35565433 +0000 UTC m=+1.493130111" Sep 6 00:04:05.371359 kubelet[2926]: I0906 00:04:05.371257 2926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-24-61" podStartSLOduration=1.371235486 podStartE2EDuration="1.371235486s" podCreationTimestamp="2025-09-06 00:04:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:04:05.356977381 +0000 UTC m=+1.494453149" watchObservedRunningTime="2025-09-06 00:04:05.371235486 +0000 UTC m=+1.508711266" Sep 6 00:04:05.371563 kubelet[2926]: I0906 00:04:05.371419 2926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-24-61" podStartSLOduration=1.371409201 podStartE2EDuration="1.371409201s" podCreationTimestamp="2025-09-06 00:04:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:04:05.37093947 +0000 UTC m=+1.508415250" watchObservedRunningTime="2025-09-06 00:04:05.371409201 +0000 UTC m=+1.508884981" Sep 6 00:04:08.157876 kubelet[2926]: I0906 00:04:08.157837 2926 kuberuntime_manager.go:1635] "Updating 
runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 6 00:04:08.159210 env[1848]: time="2025-09-06T00:04:08.159123958Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 6 00:04:08.160252 kubelet[2926]: I0906 00:04:08.160170 2926 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 6 00:04:09.046487 kubelet[2926]: I0906 00:04:09.046438 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f297855-026d-43be-b32b-30fbdd2fdaf6-lib-modules\") pod \"kube-proxy-tnkn2\" (UID: \"5f297855-026d-43be-b32b-30fbdd2fdaf6\") " pod="kube-system/kube-proxy-tnkn2" Sep 6 00:04:09.046773 kubelet[2926]: I0906 00:04:09.046737 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9qtz\" (UniqueName: \"kubernetes.io/projected/5f297855-026d-43be-b32b-30fbdd2fdaf6-kube-api-access-t9qtz\") pod \"kube-proxy-tnkn2\" (UID: \"5f297855-026d-43be-b32b-30fbdd2fdaf6\") " pod="kube-system/kube-proxy-tnkn2" Sep 6 00:04:09.046975 kubelet[2926]: I0906 00:04:09.046950 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5f297855-026d-43be-b32b-30fbdd2fdaf6-kube-proxy\") pod \"kube-proxy-tnkn2\" (UID: \"5f297855-026d-43be-b32b-30fbdd2fdaf6\") " pod="kube-system/kube-proxy-tnkn2" Sep 6 00:04:09.047123 kubelet[2926]: I0906 00:04:09.047098 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f297855-026d-43be-b32b-30fbdd2fdaf6-xtables-lock\") pod \"kube-proxy-tnkn2\" (UID: \"5f297855-026d-43be-b32b-30fbdd2fdaf6\") " pod="kube-system/kube-proxy-tnkn2" Sep 6 00:04:09.159033 kubelet[2926]: I0906 00:04:09.158990 2926 swap_util.go:74] 
"error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 6 00:04:09.247969 env[1848]: time="2025-09-06T00:04:09.247349738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tnkn2,Uid:5f297855-026d-43be-b32b-30fbdd2fdaf6,Namespace:kube-system,Attempt:0,}" Sep 6 00:04:09.288002 env[1848]: time="2025-09-06T00:04:09.287893363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:04:09.288380 env[1848]: time="2025-09-06T00:04:09.288309157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:04:09.288572 env[1848]: time="2025-09-06T00:04:09.288523470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:04:09.289644 env[1848]: time="2025-09-06T00:04:09.289544093Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c86b51310b16ac63c441aa7da1bcdc112be8b77b96ebb39ed51e723100c20a0c pid=2974 runtime=io.containerd.runc.v2 Sep 6 00:04:09.353583 kubelet[2926]: I0906 00:04:09.349755 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4369ba7f-d34b-44c2-b8d6-90fc3fea67de-var-lib-calico\") pod \"tigera-operator-58fc44c59b-22x5l\" (UID: \"4369ba7f-d34b-44c2-b8d6-90fc3fea67de\") " pod="tigera-operator/tigera-operator-58fc44c59b-22x5l" Sep 6 00:04:09.353583 kubelet[2926]: I0906 00:04:09.349819 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hddl2\" (UniqueName: 
\"kubernetes.io/projected/4369ba7f-d34b-44c2-b8d6-90fc3fea67de-kube-api-access-hddl2\") pod \"tigera-operator-58fc44c59b-22x5l\" (UID: \"4369ba7f-d34b-44c2-b8d6-90fc3fea67de\") " pod="tigera-operator/tigera-operator-58fc44c59b-22x5l" Sep 6 00:04:09.414568 env[1848]: time="2025-09-06T00:04:09.414499536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tnkn2,Uid:5f297855-026d-43be-b32b-30fbdd2fdaf6,Namespace:kube-system,Attempt:0,} returns sandbox id \"c86b51310b16ac63c441aa7da1bcdc112be8b77b96ebb39ed51e723100c20a0c\"" Sep 6 00:04:09.421060 env[1848]: time="2025-09-06T00:04:09.420914252Z" level=info msg="CreateContainer within sandbox \"c86b51310b16ac63c441aa7da1bcdc112be8b77b96ebb39ed51e723100c20a0c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 6 00:04:09.445915 env[1848]: time="2025-09-06T00:04:09.445829648Z" level=info msg="CreateContainer within sandbox \"c86b51310b16ac63c441aa7da1bcdc112be8b77b96ebb39ed51e723100c20a0c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bfeedd86a781855340822c08ca98cf13bf73c0ac1be49a351a32874f617fed89\"" Sep 6 00:04:09.447636 env[1848]: time="2025-09-06T00:04:09.447550888Z" level=info msg="StartContainer for \"bfeedd86a781855340822c08ca98cf13bf73c0ac1be49a351a32874f617fed89\"" Sep 6 00:04:09.575957 env[1848]: time="2025-09-06T00:04:09.575806592Z" level=info msg="StartContainer for \"bfeedd86a781855340822c08ca98cf13bf73c0ac1be49a351a32874f617fed89\" returns successfully" Sep 6 00:04:09.639228 env[1848]: time="2025-09-06T00:04:09.639044457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-22x5l,Uid:4369ba7f-d34b-44c2-b8d6-90fc3fea67de,Namespace:tigera-operator,Attempt:0,}" Sep 6 00:04:09.664648 env[1848]: time="2025-09-06T00:04:09.664490618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:04:09.664890 env[1848]: time="2025-09-06T00:04:09.664614616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:04:09.664890 env[1848]: time="2025-09-06T00:04:09.664858986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:04:09.667244 env[1848]: time="2025-09-06T00:04:09.665517939Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/59b240fab4566e63d90825624e89161dc75199c517b9827e2129f1a7ccce4559 pid=3049 runtime=io.containerd.runc.v2 Sep 6 00:04:09.811985 env[1848]: time="2025-09-06T00:04:09.811927872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-22x5l,Uid:4369ba7f-d34b-44c2-b8d6-90fc3fea67de,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"59b240fab4566e63d90825624e89161dc75199c517b9827e2129f1a7ccce4559\"" Sep 6 00:04:09.815399 env[1848]: time="2025-09-06T00:04:09.815347365Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 6 00:04:09.861386 kernel: kauditd_printk_skb: 4 callbacks suppressed Sep 6 00:04:09.861524 kernel: audit: type=1325 audit(1757117049.854:238): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=3118 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:04:09.854000 audit[3118]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=3118 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:04:09.860000 audit[3119]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=3119 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:04:09.872125 kernel: audit: type=1325 audit(1757117049.860:239): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=3119 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Sep 6 00:04:09.860000 audit[3119]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd1735c10 a2=0 a3=1 items=0 ppid=3026 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:09.884278 kernel: audit: type=1300 audit(1757117049.860:239): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd1735c10 a2=0 a3=1 items=0 ppid=3026 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:09.884423 kernel: audit: type=1327 audit(1757117049.860:239): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 6 00:04:09.860000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 6 00:04:09.854000 audit[3118]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffa32f580 a2=0 a3=1 items=0 ppid=3026 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:09.903250 kernel: audit: type=1300 audit(1757117049.854:238): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffa32f580 a2=0 a3=1 items=0 ppid=3026 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:09.854000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 6 00:04:09.909498 kernel: audit: 
type=1327 audit(1757117049.854:238): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 6 00:04:09.909554 kernel: audit: type=1325 audit(1757117049.865:240): table=nat:40 family=10 entries=1 op=nft_register_chain pid=3122 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:04:09.865000 audit[3122]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=3122 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:04:09.865000 audit[3122]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffef90b8c0 a2=0 a3=1 items=0 ppid=3026 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:09.865000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 6 00:04:09.932557 kernel: audit: type=1300 audit(1757117049.865:240): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffef90b8c0 a2=0 a3=1 items=0 ppid=3026 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:09.932691 kernel: audit: type=1327 audit(1757117049.865:240): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 6 00:04:09.869000 audit[3123]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=3123 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:04:09.938680 kernel: audit: type=1325 audit(1757117049.869:241): table=filter:41 family=10 entries=1 op=nft_register_chain pid=3123 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:04:09.869000 audit[3123]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=104 a0=3 a1=ffffec8bc420 a2=0 a3=1 items=0 ppid=3026 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:09.869000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 6 00:04:09.888000 audit[3121]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=3121 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:04:09.888000 audit[3121]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd440cfe0 a2=0 a3=1 items=0 ppid=3026 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:09.888000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 6 00:04:09.892000 audit[3124]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=3124 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:04:09.892000 audit[3124]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd0b479a0 a2=0 a3=1 items=0 ppid=3026 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:09.892000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 6 00:04:09.957000 audit[3126]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=3126 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:04:09.957000 audit[3126]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=108 a0=3 a1=fffff341d9a0 a2=0 a3=1 items=0 ppid=3026 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:09.957000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 6 00:04:09.963000 audit[3128]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=3128 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:04:09.963000 audit[3128]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffee56f620 a2=0 a3=1 items=0 ppid=3026 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:09.963000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Sep 6 00:04:09.971000 audit[3131]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=3131 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:04:09.971000 audit[3131]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffdf5a8090 a2=0 a3=1 items=0 ppid=3026 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:09.971000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Sep 6 00:04:09.973000 audit[3132]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=3132 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:04:09.973000 audit[3132]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffba47400 a2=0 a3=1 items=0 ppid=3026 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:09.973000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 6 00:04:09.979000 audit[3134]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=3134 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:04:09.979000 audit[3134]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff24ce120 a2=0 a3=1 items=0 ppid=3026 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:09.979000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 6 00:04:09.981000 audit[3135]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=3135 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:04:09.981000 audit[3135]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcff923f0 
a2=0 a3=1 items=0 ppid=3026 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:09.981000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 6 00:04:09.988000 audit[3137]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=3137 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:04:09.988000 audit[3137]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffdc18c4e0 a2=0 a3=1 items=0 ppid=3026 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:09.988000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 6 00:04:09.996000 audit[3140]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=3140 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:04:09.996000 audit[3140]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffed6f2690 a2=0 a3=1 items=0 ppid=3026 pid=3140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:09.996000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Sep 6 00:04:09.999000 audit[3141]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=3141 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:04:09.999000 audit[3141]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcb4763a0 a2=0 a3=1 items=0 ppid=3026 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:09.999000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 6 00:04:10.004000 audit[3143]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=3143 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:04:10.004000 audit[3143]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffda7bd9a0 a2=0 a3=1 items=0 ppid=3026 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.004000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 6 00:04:10.007000 audit[3144]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=3144 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:04:10.007000 audit[3144]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe96663d0 a2=0 a3=1 items=0 
ppid=3026 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.007000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 6 00:04:10.012000 audit[3146]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=3146 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:04:10.012000 audit[3146]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc9da1a00 a2=0 a3=1 items=0 ppid=3026 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.012000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 6 00:04:10.020000 audit[3149]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=3149 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:04:10.020000 audit[3149]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffda707870 a2=0 a3=1 items=0 ppid=3026 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.020000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A 
Sep 6 00:04:10.029000 audit[3152]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=3152 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:04:10.029000 audit[3152]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd7b0ff30 a2=0 a3=1 items=0 ppid=3026 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.029000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Sep 6 00:04:10.035000 audit[3153]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3153 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:04:10.035000 audit[3153]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc5493a20 a2=0 a3=1 items=0 ppid=3026 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.035000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Sep 6 00:04:10.040000 audit[3155]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=3155 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:04:10.040000 audit[3155]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffc34b50c0 a2=0 a3=1 items=0 ppid=3026 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.040000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 6 00:04:10.048000 audit[3158]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=3158 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:04:10.048000 audit[3158]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffca6d38e0 a2=0 a3=1 items=0 ppid=3026 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.048000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 6 00:04:10.053000 audit[3159]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=3159 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:04:10.053000 audit[3159]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff001ca20 a2=0 a3=1 items=0 ppid=3026 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.053000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Sep 6 00:04:10.059000 audit[3161]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=3161 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 00:04:10.059000 audit[3161]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffcebe9b60 a2=0 a3=1 items=0 ppid=3026 pid=3161 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.059000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Sep 6 00:04:10.103000 audit[3167]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=3167 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:04:10.103000 audit[3167]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc72d4f20 a2=0 a3=1 items=0 ppid=3026 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.103000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:04:10.119000 audit[3167]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=3167 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:04:10.119000 audit[3167]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffc72d4f20 a2=0 a3=1 items=0 ppid=3026 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.119000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:04:10.122000 audit[3172]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=3172 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:04:10.122000 audit[3172]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc81d7410 a2=0 a3=1 items=0 ppid=3026 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.122000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 6 00:04:10.127000 audit[3174]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=3174 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:04:10.127000 audit[3174]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd2b1bf70 a2=0 a3=1 items=0 ppid=3026 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.127000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Sep 6 00:04:10.136000 audit[3177]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=3177 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:04:10.136000 audit[3177]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffccab0bb0 a2=0 a3=1 items=0 ppid=3026 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.136000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Sep 6 00:04:10.139000 audit[3178]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=3178 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:04:10.139000 audit[3178]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd7560360 a2=0 a3=1 items=0 ppid=3026 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.139000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 6 00:04:10.144000 audit[3180]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=3180 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:04:10.144000 audit[3180]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc402ae80 a2=0 a3=1 items=0 ppid=3026 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.144000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 6 00:04:10.149000 audit[3181]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=3181 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:04:10.149000 audit[3181]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 
a1=ffffc67498a0 a2=0 a3=1 items=0 ppid=3026 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.149000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 6 00:04:10.154000 audit[3183]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:04:10.154000 audit[3183]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffcc844600 a2=0 a3=1 items=0 ppid=3026 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.154000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Sep 6 00:04:10.162000 audit[3186]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=3186 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:04:10.162000 audit[3186]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=fffffd471bd0 a2=0 a3=1 items=0 ppid=3026 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.162000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 6 00:04:10.164000 audit[3187]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=3187 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:04:10.164000 audit[3187]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe949c330 a2=0 a3=1 items=0 ppid=3026 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.164000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 6 00:04:10.177637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount55684912.mount: Deactivated successfully. 
Sep 6 00:04:10.181000 audit[3189]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=3189 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:04:10.181000 audit[3189]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe701c740 a2=0 a3=1 items=0 ppid=3026 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.181000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 6 00:04:10.187000 audit[3190]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=3190 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:04:10.187000 audit[3190]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffea3fd950 a2=0 a3=1 items=0 ppid=3026 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.187000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 6 00:04:10.193000 audit[3192]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=3192 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:04:10.193000 audit[3192]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc4818020 a2=0 a3=1 items=0 ppid=3026 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.193000 audit: 
PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 6 00:04:10.200000 audit[3195]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=3195 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:04:10.200000 audit[3195]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffebbf1c70 a2=0 a3=1 items=0 ppid=3026 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.200000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Sep 6 00:04:10.208000 audit[3198]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=3198 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:04:10.208000 audit[3198]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff65bf330 a2=0 a3=1 items=0 ppid=3026 pid=3198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.208000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Sep 6 00:04:10.214000 audit[3199]: NETFILTER_CFG table=nat:79 family=10 entries=1 
op=nft_register_chain pid=3199 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:04:10.214000 audit[3199]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe718aa90 a2=0 a3=1 items=0 ppid=3026 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.214000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Sep 6 00:04:10.223000 audit[3201]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=3201 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:04:10.223000 audit[3201]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffe3884240 a2=0 a3=1 items=0 ppid=3026 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.223000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 6 00:04:10.230000 audit[3204]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=3204 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:04:10.230000 audit[3204]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffdfb2c4a0 a2=0 a3=1 items=0 ppid=3026 pid=3204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.230000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 6 00:04:10.232000 audit[3205]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=3205 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:04:10.232000 audit[3205]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdfd47710 a2=0 a3=1 items=0 ppid=3026 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.232000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Sep 6 00:04:10.242000 audit[3207]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=3207 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:04:10.242000 audit[3207]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffe1e879f0 a2=0 a3=1 items=0 ppid=3026 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.242000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Sep 6 00:04:10.245000 audit[3208]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3208 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:04:10.245000 audit[3208]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc3f0ac40 a2=0 a3=1 items=0 ppid=3026 pid=3208 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.245000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Sep 6 00:04:10.250000 audit[3210]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3210 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:04:10.250000 audit[3210]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff185eff0 a2=0 a3=1 items=0 ppid=3026 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.250000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 6 00:04:10.258000 audit[3213]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=3213 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 00:04:10.258000 audit[3213]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffcecc9500 a2=0 a3=1 items=0 ppid=3026 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.258000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 6 00:04:10.265000 audit[3215]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=3215 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Sep 6 00:04:10.265000 audit[3215]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffc4dddd30 a2=0 
a3=1 items=0 ppid=3026 pid=3215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.265000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:04:10.266000 audit[3215]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=3215 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Sep 6 00:04:10.266000 audit[3215]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffc4dddd30 a2=0 a3=1 items=0 ppid=3026 pid=3215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:10.266000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:04:11.197352 kubelet[2926]: I0906 00:04:11.196966 2926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tnkn2" podStartSLOduration=3.196945349 podStartE2EDuration="3.196945349s" podCreationTimestamp="2025-09-06 00:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:04:10.336984294 +0000 UTC m=+6.474460062" watchObservedRunningTime="2025-09-06 00:04:11.196945349 +0000 UTC m=+7.334421105" Sep 6 00:04:11.756281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1388952497.mount: Deactivated successfully. 
Sep 6 00:04:13.022922 env[1848]: time="2025-09-06T00:04:13.022857867Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:04:13.025778 env[1848]: time="2025-09-06T00:04:13.025713367Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:04:13.028336 env[1848]: time="2025-09-06T00:04:13.028291511Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:04:13.032687 env[1848]: time="2025-09-06T00:04:13.032636787Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 6 00:04:13.032937 env[1848]: time="2025-09-06T00:04:13.031425279Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:04:13.039814 env[1848]: time="2025-09-06T00:04:13.039738958Z" level=info msg="CreateContainer within sandbox \"59b240fab4566e63d90825624e89161dc75199c517b9827e2129f1a7ccce4559\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 6 00:04:13.062735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1747208745.mount: Deactivated successfully. 
Sep 6 00:04:13.071869 env[1848]: time="2025-09-06T00:04:13.071808248Z" level=info msg="CreateContainer within sandbox \"59b240fab4566e63d90825624e89161dc75199c517b9827e2129f1a7ccce4559\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c83364e7f01497ed0cb0db2086ad11f33d93d518ad01cd1c6209ec7f90815296\"" Sep 6 00:04:13.073266 env[1848]: time="2025-09-06T00:04:13.073136991Z" level=info msg="StartContainer for \"c83364e7f01497ed0cb0db2086ad11f33d93d518ad01cd1c6209ec7f90815296\"" Sep 6 00:04:13.223351 env[1848]: time="2025-09-06T00:04:13.222163731Z" level=info msg="StartContainer for \"c83364e7f01497ed0cb0db2086ad11f33d93d518ad01cd1c6209ec7f90815296\" returns successfully" Sep 6 00:04:13.382178 kubelet[2926]: I0906 00:04:13.381986 2926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-22x5l" podStartSLOduration=1.160669331 podStartE2EDuration="4.381966441s" podCreationTimestamp="2025-09-06 00:04:09 +0000 UTC" firstStartedPulling="2025-09-06 00:04:09.814338014 +0000 UTC m=+5.951813770" lastFinishedPulling="2025-09-06 00:04:13.035635136 +0000 UTC m=+9.173110880" observedRunningTime="2025-09-06 00:04:13.381893582 +0000 UTC m=+9.519369386" watchObservedRunningTime="2025-09-06 00:04:13.381966441 +0000 UTC m=+9.519442221" Sep 6 00:04:16.649574 amazon-ssm-agent[1820]: 2025-09-06 00:04:16 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Sep 6 00:04:20.233554 sudo[2161]: pam_unix(sudo:session): session closed for user root Sep 6 00:04:20.245528 kernel: kauditd_printk_skb: 143 callbacks suppressed Sep 6 00:04:20.245679 kernel: audit: type=1106 audit(1757117060.232:289): pid=2161 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Sep 6 00:04:20.232000 audit[2161]: USER_END pid=2161 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:04:20.245000 audit[2161]: CRED_DISP pid=2161 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:04:20.261834 kernel: audit: type=1104 audit(1757117060.245:290): pid=2161 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 00:04:20.271789 sshd[2157]: pam_unix(sshd:session): session closed for user core Sep 6 00:04:20.272000 audit[2157]: USER_END pid=2157 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:04:20.288056 systemd-logind[1837]: Session 7 logged out. Waiting for processes to exit. Sep 6 00:04:20.273000 audit[2157]: CRED_DISP pid=2157 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:04:20.289958 systemd[1]: sshd@6-172.31.24.61:22-147.75.109.163:52514.service: Deactivated successfully. Sep 6 00:04:20.292838 systemd[1]: session-7.scope: Deactivated successfully. 
Sep 6 00:04:20.299556 kernel: audit: type=1106 audit(1757117060.272:291): pid=2157 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:04:20.299680 kernel: audit: type=1104 audit(1757117060.273:292): pid=2157 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:04:20.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.24.61:22-147.75.109.163:52514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:04:20.311400 kernel: audit: type=1131 audit(1757117060.289:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.24.61:22-147.75.109.163:52514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:04:20.309378 systemd-logind[1837]: Removed session 7. 
Sep 6 00:04:23.043000 audit[3300]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=3300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:04:23.050226 kernel: audit: type=1325 audit(1757117063.043:294): table=filter:89 family=2 entries=15 op=nft_register_rule pid=3300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:04:23.043000 audit[3300]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffff83746c0 a2=0 a3=1 items=0 ppid=3026 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:23.043000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:04:23.079225 kernel: audit: type=1300 audit(1757117063.043:294): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffff83746c0 a2=0 a3=1 items=0 ppid=3026 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:23.079319 kernel: audit: type=1327 audit(1757117063.043:294): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:04:23.070000 audit[3300]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=3300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:04:23.092317 kernel: audit: type=1325 audit(1757117063.070:295): table=nat:90 family=2 entries=12 op=nft_register_rule pid=3300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:04:23.070000 audit[3300]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff83746c0 a2=0 a3=1 items=0 ppid=3026 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:23.110697 kernel: audit: type=1300 audit(1757117063.070:295): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff83746c0 a2=0 a3=1 items=0 ppid=3026 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:23.070000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:04:23.132000 audit[3302]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=3302 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:04:23.132000 audit[3302]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc9ac9b50 a2=0 a3=1 items=0 ppid=3026 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:23.132000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:04:23.136000 audit[3302]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=3302 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:04:23.136000 audit[3302]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc9ac9b50 a2=0 a3=1 items=0 ppid=3026 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:23.136000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:04:31.025000 audit[3304]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=3304 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:04:31.063600 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 6 00:04:31.063711 kernel: audit: type=1325 audit(1757117071.025:298): table=filter:93 family=2 entries=17 op=nft_register_rule pid=3304 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:04:31.119733 kubelet[2926]: I0906 00:04:31.119679 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grt5x\" (UniqueName: \"kubernetes.io/projected/fc3b92db-3c15-4baf-8532-37e6a9bca7af-kube-api-access-grt5x\") pod \"calico-typha-5d7d6cbb8f-7jvkr\" (UID: \"fc3b92db-3c15-4baf-8532-37e6a9bca7af\") " pod="calico-system/calico-typha-5d7d6cbb8f-7jvkr" Sep 6 00:04:31.120545 kubelet[2926]: I0906 00:04:31.120508 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc3b92db-3c15-4baf-8532-37e6a9bca7af-tigera-ca-bundle\") pod \"calico-typha-5d7d6cbb8f-7jvkr\" (UID: \"fc3b92db-3c15-4baf-8532-37e6a9bca7af\") " pod="calico-system/calico-typha-5d7d6cbb8f-7jvkr" Sep 6 00:04:31.120733 kubelet[2926]: I0906 00:04:31.120705 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/fc3b92db-3c15-4baf-8532-37e6a9bca7af-typha-certs\") pod \"calico-typha-5d7d6cbb8f-7jvkr\" (UID: \"fc3b92db-3c15-4baf-8532-37e6a9bca7af\") " pod="calico-system/calico-typha-5d7d6cbb8f-7jvkr" Sep 6 00:04:31.025000 audit[3304]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffec903a10 a2=0 a3=1 items=0 ppid=3026 pid=3304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:31.170227 kernel: audit: type=1300 audit(1757117071.025:298): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffec903a10 a2=0 a3=1 items=0 ppid=3026 pid=3304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:31.025000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:04:31.179238 kernel: audit: type=1327 audit(1757117071.025:298): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:04:31.181000 audit[3304]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=3304 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:04:31.191247 kernel: audit: type=1325 audit(1757117071.181:299): table=nat:94 family=2 entries=12 op=nft_register_rule pid=3304 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:04:31.181000 audit[3304]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffec903a10 a2=0 a3=1 items=0 ppid=3026 pid=3304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:31.181000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:04:31.214413 kernel: audit: type=1300 audit(1757117071.181:299): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffec903a10 a2=0 a3=1 items=0 ppid=3026 pid=3304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:31.214557 kernel: audit: type=1327 audit(1757117071.181:299): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:04:31.244000 audit[3308]: NETFILTER_CFG table=filter:95 family=2 entries=19 op=nft_register_rule pid=3308 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:04:31.267274 kernel: audit: type=1325 audit(1757117071.244:300): table=filter:95 family=2 entries=19 op=nft_register_rule pid=3308 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:04:31.267395 kernel: audit: type=1300 audit(1757117071.244:300): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd7ce9f60 a2=0 a3=1 items=0 ppid=3026 pid=3308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:31.244000 audit[3308]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd7ce9f60 a2=0 a3=1 items=0 ppid=3026 pid=3308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:31.244000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:04:31.289339 kernel: audit: type=1327 audit(1757117071.244:300): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:04:31.254000 audit[3308]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=3308 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:04:31.305492 kernel: audit: type=1325 
audit(1757117071.254:301): table=nat:96 family=2 entries=12 op=nft_register_rule pid=3308 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:04:31.254000 audit[3308]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd7ce9f60 a2=0 a3=1 items=0 ppid=3026 pid=3308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:31.254000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:04:31.393699 env[1848]: time="2025-09-06T00:04:31.393515390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d7d6cbb8f-7jvkr,Uid:fc3b92db-3c15-4baf-8532-37e6a9bca7af,Namespace:calico-system,Attempt:0,}" Sep 6 00:04:31.428559 kubelet[2926]: I0906 00:04:31.428493 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3860f5e-9377-485c-99a0-76a1028f7c20-tigera-ca-bundle\") pod \"calico-node-fgtb5\" (UID: \"d3860f5e-9377-485c-99a0-76a1028f7c20\") " pod="calico-system/calico-node-fgtb5" Sep 6 00:04:31.428742 kubelet[2926]: I0906 00:04:31.428595 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3860f5e-9377-485c-99a0-76a1028f7c20-lib-modules\") pod \"calico-node-fgtb5\" (UID: \"d3860f5e-9377-485c-99a0-76a1028f7c20\") " pod="calico-system/calico-node-fgtb5" Sep 6 00:04:31.428742 kubelet[2926]: I0906 00:04:31.428667 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3860f5e-9377-485c-99a0-76a1028f7c20-xtables-lock\") pod \"calico-node-fgtb5\" (UID: 
\"d3860f5e-9377-485c-99a0-76a1028f7c20\") " pod="calico-system/calico-node-fgtb5" Sep 6 00:04:31.428869 kubelet[2926]: I0906 00:04:31.428712 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d3860f5e-9377-485c-99a0-76a1028f7c20-flexvol-driver-host\") pod \"calico-node-fgtb5\" (UID: \"d3860f5e-9377-485c-99a0-76a1028f7c20\") " pod="calico-system/calico-node-fgtb5" Sep 6 00:04:31.428869 kubelet[2926]: I0906 00:04:31.428786 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d3860f5e-9377-485c-99a0-76a1028f7c20-var-lib-calico\") pod \"calico-node-fgtb5\" (UID: \"d3860f5e-9377-485c-99a0-76a1028f7c20\") " pod="calico-system/calico-node-fgtb5" Sep 6 00:04:31.428869 kubelet[2926]: I0906 00:04:31.428860 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d3860f5e-9377-485c-99a0-76a1028f7c20-cni-bin-dir\") pod \"calico-node-fgtb5\" (UID: \"d3860f5e-9377-485c-99a0-76a1028f7c20\") " pod="calico-system/calico-node-fgtb5" Sep 6 00:04:31.429052 kubelet[2926]: I0906 00:04:31.428929 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d3860f5e-9377-485c-99a0-76a1028f7c20-policysync\") pod \"calico-node-fgtb5\" (UID: \"d3860f5e-9377-485c-99a0-76a1028f7c20\") " pod="calico-system/calico-node-fgtb5" Sep 6 00:04:31.429052 kubelet[2926]: I0906 00:04:31.428973 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d3860f5e-9377-485c-99a0-76a1028f7c20-cni-log-dir\") pod \"calico-node-fgtb5\" (UID: \"d3860f5e-9377-485c-99a0-76a1028f7c20\") " 
pod="calico-system/calico-node-fgtb5" Sep 6 00:04:31.429199 kubelet[2926]: I0906 00:04:31.429044 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h82k\" (UniqueName: \"kubernetes.io/projected/d3860f5e-9377-485c-99a0-76a1028f7c20-kube-api-access-6h82k\") pod \"calico-node-fgtb5\" (UID: \"d3860f5e-9377-485c-99a0-76a1028f7c20\") " pod="calico-system/calico-node-fgtb5" Sep 6 00:04:31.429199 kubelet[2926]: I0906 00:04:31.429103 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d3860f5e-9377-485c-99a0-76a1028f7c20-var-run-calico\") pod \"calico-node-fgtb5\" (UID: \"d3860f5e-9377-485c-99a0-76a1028f7c20\") " pod="calico-system/calico-node-fgtb5" Sep 6 00:04:31.429477 kubelet[2926]: I0906 00:04:31.429142 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d3860f5e-9377-485c-99a0-76a1028f7c20-cni-net-dir\") pod \"calico-node-fgtb5\" (UID: \"d3860f5e-9377-485c-99a0-76a1028f7c20\") " pod="calico-system/calico-node-fgtb5" Sep 6 00:04:31.429558 kubelet[2926]: I0906 00:04:31.429476 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d3860f5e-9377-485c-99a0-76a1028f7c20-node-certs\") pod \"calico-node-fgtb5\" (UID: \"d3860f5e-9377-485c-99a0-76a1028f7c20\") " pod="calico-system/calico-node-fgtb5" Sep 6 00:04:31.458657 env[1848]: time="2025-09-06T00:04:31.458542841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:04:31.458845 env[1848]: time="2025-09-06T00:04:31.458782481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:04:31.458935 env[1848]: time="2025-09-06T00:04:31.458873370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:04:31.462494 env[1848]: time="2025-09-06T00:04:31.462367403Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d69899ca3a66158a0b4db747833310a0b756c1cbdc5463efe2e60b1a4bbcbe27 pid=3316 runtime=io.containerd.runc.v2 Sep 6 00:04:31.544271 kubelet[2926]: E0906 00:04:31.544219 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.544271 kubelet[2926]: W0906 00:04:31.544261 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.544516 kubelet[2926]: E0906 00:04:31.544331 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:31.571230 kubelet[2926]: E0906 00:04:31.569611 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zpqph" podUID="8f59dc9d-44f6-4633-8546-11f2219b7da2" Sep 6 00:04:31.575776 kubelet[2926]: E0906 00:04:31.575711 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.576054 kubelet[2926]: W0906 00:04:31.576021 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.576351 kubelet[2926]: E0906 00:04:31.576259 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:31.599097 kubelet[2926]: E0906 00:04:31.599059 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.599391 kubelet[2926]: W0906 00:04:31.599358 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.599577 kubelet[2926]: E0906 00:04:31.599550 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:31.600235 kubelet[2926]: E0906 00:04:31.600175 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.600454 kubelet[2926]: W0906 00:04:31.600424 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.600628 kubelet[2926]: E0906 00:04:31.600601 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:31.601279 kubelet[2926]: E0906 00:04:31.601246 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.601517 kubelet[2926]: W0906 00:04:31.601487 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.601694 kubelet[2926]: E0906 00:04:31.601664 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:31.602540 kubelet[2926]: E0906 00:04:31.602504 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.602813 kubelet[2926]: W0906 00:04:31.602778 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.603008 kubelet[2926]: E0906 00:04:31.602975 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:31.604901 kubelet[2926]: E0906 00:04:31.604861 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.605239 kubelet[2926]: W0906 00:04:31.605175 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.605484 kubelet[2926]: E0906 00:04:31.605453 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:31.608960 kubelet[2926]: E0906 00:04:31.608922 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.609170 kubelet[2926]: W0906 00:04:31.609140 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.609347 kubelet[2926]: E0906 00:04:31.609319 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:31.609873 kubelet[2926]: E0906 00:04:31.609842 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.610049 kubelet[2926]: W0906 00:04:31.610021 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.610283 kubelet[2926]: E0906 00:04:31.610255 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:31.610845 kubelet[2926]: E0906 00:04:31.610817 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.611047 kubelet[2926]: W0906 00:04:31.611017 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.611204 kubelet[2926]: E0906 00:04:31.611152 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:31.614609 kubelet[2926]: E0906 00:04:31.614568 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.614854 kubelet[2926]: W0906 00:04:31.614823 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.614997 kubelet[2926]: E0906 00:04:31.614970 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:31.618318 kubelet[2926]: E0906 00:04:31.618139 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.618638 kubelet[2926]: W0906 00:04:31.618594 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.618822 kubelet[2926]: E0906 00:04:31.618791 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:31.620003 kubelet[2926]: E0906 00:04:31.619958 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.620375 kubelet[2926]: W0906 00:04:31.620334 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.620598 kubelet[2926]: E0906 00:04:31.620570 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:31.621296 kubelet[2926]: E0906 00:04:31.621258 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.621547 kubelet[2926]: W0906 00:04:31.621509 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.621718 kubelet[2926]: E0906 00:04:31.621685 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:31.622505 kubelet[2926]: E0906 00:04:31.622467 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.622743 kubelet[2926]: W0906 00:04:31.622710 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.622886 kubelet[2926]: E0906 00:04:31.622857 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:31.623686 kubelet[2926]: E0906 00:04:31.623643 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.623977 kubelet[2926]: W0906 00:04:31.623903 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.624298 kubelet[2926]: E0906 00:04:31.624173 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:31.625879 kubelet[2926]: E0906 00:04:31.625809 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.626305 kubelet[2926]: W0906 00:04:31.626177 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.626571 kubelet[2926]: E0906 00:04:31.626514 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:31.634548 kubelet[2926]: E0906 00:04:31.634507 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.634875 kubelet[2926]: W0906 00:04:31.634838 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.635037 kubelet[2926]: E0906 00:04:31.635003 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:31.635933 kubelet[2926]: E0906 00:04:31.635900 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.636147 kubelet[2926]: W0906 00:04:31.636118 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.636297 kubelet[2926]: E0906 00:04:31.636270 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:31.641377 kubelet[2926]: E0906 00:04:31.641336 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.641587 kubelet[2926]: W0906 00:04:31.641557 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.641724 kubelet[2926]: E0906 00:04:31.641697 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:31.642795 kubelet[2926]: E0906 00:04:31.642761 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.643037 kubelet[2926]: W0906 00:04:31.643004 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.643177 kubelet[2926]: E0906 00:04:31.643150 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:31.646533 kubelet[2926]: E0906 00:04:31.646407 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.647025 kubelet[2926]: W0906 00:04:31.646705 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.647529 kubelet[2926]: E0906 00:04:31.647500 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:31.652639 kubelet[2926]: E0906 00:04:31.652601 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.652847 kubelet[2926]: W0906 00:04:31.652815 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.653020 kubelet[2926]: E0906 00:04:31.652992 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:31.653213 kubelet[2926]: I0906 00:04:31.653154 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f59dc9d-44f6-4633-8546-11f2219b7da2-kubelet-dir\") pod \"csi-node-driver-zpqph\" (UID: \"8f59dc9d-44f6-4633-8546-11f2219b7da2\") " pod="calico-system/csi-node-driver-zpqph" Sep 6 00:04:31.656405 kubelet[2926]: E0906 00:04:31.656370 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.661459 kubelet[2926]: W0906 00:04:31.661409 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.661711 kubelet[2926]: E0906 00:04:31.661680 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:31.662800 kubelet[2926]: E0906 00:04:31.662769 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.663011 kubelet[2926]: W0906 00:04:31.662981 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.663155 kubelet[2926]: E0906 00:04:31.663128 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:31.671418 kubelet[2926]: E0906 00:04:31.671376 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.682051 kubelet[2926]: W0906 00:04:31.682007 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.682346 kubelet[2926]: E0906 00:04:31.682313 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:31.682554 kubelet[2926]: I0906 00:04:31.682517 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq7c2\" (UniqueName: \"kubernetes.io/projected/8f59dc9d-44f6-4633-8546-11f2219b7da2-kube-api-access-sq7c2\") pod \"csi-node-driver-zpqph\" (UID: \"8f59dc9d-44f6-4633-8546-11f2219b7da2\") " pod="calico-system/csi-node-driver-zpqph" Sep 6 00:04:31.696690 env[1848]: time="2025-09-06T00:04:31.696607856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fgtb5,Uid:d3860f5e-9377-485c-99a0-76a1028f7c20,Namespace:calico-system,Attempt:0,}" Sep 6 00:04:31.713140 kubelet[2926]: E0906 00:04:31.713066 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.713140 kubelet[2926]: W0906 00:04:31.713119 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.713479 kubelet[2926]: E0906 00:04:31.713158 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:31.719438 kubelet[2926]: E0906 00:04:31.719375 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.719438 kubelet[2926]: W0906 00:04:31.719424 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.719745 kubelet[2926]: E0906 00:04:31.719659 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:31.720146 kubelet[2926]: E0906 00:04:31.720099 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.720146 kubelet[2926]: W0906 00:04:31.720139 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.720359 kubelet[2926]: E0906 00:04:31.720170 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:31.720359 kubelet[2926]: I0906 00:04:31.720300 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8f59dc9d-44f6-4633-8546-11f2219b7da2-registration-dir\") pod \"csi-node-driver-zpqph\" (UID: \"8f59dc9d-44f6-4633-8546-11f2219b7da2\") " pod="calico-system/csi-node-driver-zpqph" Sep 6 00:04:31.723029 kubelet[2926]: E0906 00:04:31.722419 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.723029 kubelet[2926]: W0906 00:04:31.722463 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.723029 kubelet[2926]: E0906 00:04:31.722524 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:31.723029 kubelet[2926]: I0906 00:04:31.722576 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8f59dc9d-44f6-4633-8546-11f2219b7da2-socket-dir\") pod \"csi-node-driver-zpqph\" (UID: \"8f59dc9d-44f6-4633-8546-11f2219b7da2\") " pod="calico-system/csi-node-driver-zpqph" Sep 6 00:04:31.725264 kubelet[2926]: E0906 00:04:31.723934 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.725264 kubelet[2926]: W0906 00:04:31.723979 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.725264 kubelet[2926]: E0906 00:04:31.724017 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:31.725264 kubelet[2926]: I0906 00:04:31.724063 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8f59dc9d-44f6-4633-8546-11f2219b7da2-varrun\") pod \"csi-node-driver-zpqph\" (UID: \"8f59dc9d-44f6-4633-8546-11f2219b7da2\") " pod="calico-system/csi-node-driver-zpqph" Sep 6 00:04:31.728589 kubelet[2926]: E0906 00:04:31.728531 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.728589 kubelet[2926]: W0906 00:04:31.728575 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.728815 kubelet[2926]: E0906 00:04:31.728621 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:31.750442 kubelet[2926]: E0906 00:04:31.750376 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.750442 kubelet[2926]: W0906 00:04:31.750423 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.750778 kubelet[2926]: E0906 00:04:31.750645 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:31.750967 kubelet[2926]: E0906 00:04:31.750921 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.750967 kubelet[2926]: W0906 00:04:31.750952 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.751175 kubelet[2926]: E0906 00:04:31.751138 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:31.751471 kubelet[2926]: E0906 00:04:31.751431 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.751471 kubelet[2926]: W0906 00:04:31.751464 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.751635 kubelet[2926]: E0906 00:04:31.751500 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:31.751881 kubelet[2926]: E0906 00:04:31.751843 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.751881 kubelet[2926]: W0906 00:04:31.751874 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.752028 kubelet[2926]: E0906 00:04:31.751899 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:31.755843 kubelet[2926]: E0906 00:04:31.752259 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.755843 kubelet[2926]: W0906 00:04:31.752287 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.755843 kubelet[2926]: E0906 00:04:31.752311 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:31.762334 env[1848]: time="2025-09-06T00:04:31.760724167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:04:31.762334 env[1848]: time="2025-09-06T00:04:31.760805635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:04:31.762334 env[1848]: time="2025-09-06T00:04:31.760832735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:04:31.764297 env[1848]: time="2025-09-06T00:04:31.763135434Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0629f4156aa8104fffd04d639036b780f9eb814deea9c56f707e10d19ffc8e57 pid=3397 runtime=io.containerd.runc.v2 Sep 6 00:04:31.825330 kubelet[2926]: E0906 00:04:31.825275 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.825330 kubelet[2926]: W0906 00:04:31.825318 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.825601 kubelet[2926]: E0906 00:04:31.825353 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:31.825925 kubelet[2926]: E0906 00:04:31.825871 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.825925 kubelet[2926]: W0906 00:04:31.825909 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.826118 kubelet[2926]: E0906 00:04:31.825950 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:31.826576 kubelet[2926]: E0906 00:04:31.826530 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.826576 kubelet[2926]: W0906 00:04:31.826568 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.826757 kubelet[2926]: E0906 00:04:31.826610 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:31.827158 kubelet[2926]: E0906 00:04:31.827114 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.827158 kubelet[2926]: W0906 00:04:31.827151 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.827416 kubelet[2926]: E0906 00:04:31.827380 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:31.828601 kubelet[2926]: E0906 00:04:31.828344 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.828601 kubelet[2926]: W0906 00:04:31.828386 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.828866 kubelet[2926]: E0906 00:04:31.828605 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:31.831246 kubelet[2926]: E0906 00:04:31.829328 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.831246 kubelet[2926]: W0906 00:04:31.829366 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.831246 kubelet[2926]: E0906 00:04:31.829568 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:31.833288 kubelet[2926]: E0906 00:04:31.831885 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.833288 kubelet[2926]: W0906 00:04:31.831924 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.833288 kubelet[2926]: E0906 00:04:31.832142 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:31.833288 kubelet[2926]: E0906 00:04:31.832451 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.833288 kubelet[2926]: W0906 00:04:31.832476 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.833288 kubelet[2926]: E0906 00:04:31.832664 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:31.833288 kubelet[2926]: E0906 00:04:31.832974 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.833288 kubelet[2926]: W0906 00:04:31.832992 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.833288 kubelet[2926]: E0906 00:04:31.833137 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:31.833945 kubelet[2926]: E0906 00:04:31.833403 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.833945 kubelet[2926]: W0906 00:04:31.833425 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.833945 kubelet[2926]: E0906 00:04:31.833593 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:31.833945 kubelet[2926]: E0906 00:04:31.833806 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.833945 kubelet[2926]: W0906 00:04:31.833822 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.834285 kubelet[2926]: E0906 00:04:31.833966 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:31.834285 kubelet[2926]: E0906 00:04:31.834155 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.834285 kubelet[2926]: W0906 00:04:31.834172 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.834475 kubelet[2926]: E0906 00:04:31.834376 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:31.835255 kubelet[2926]: E0906 00:04:31.834577 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.835255 kubelet[2926]: W0906 00:04:31.834607 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.835255 kubelet[2926]: E0906 00:04:31.834748 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:31.835255 kubelet[2926]: E0906 00:04:31.834966 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.835255 kubelet[2926]: W0906 00:04:31.834983 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.835255 kubelet[2926]: E0906 00:04:31.835133 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:31.835695 kubelet[2926]: E0906 00:04:31.835410 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.835695 kubelet[2926]: W0906 00:04:31.835429 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.835695 kubelet[2926]: E0906 00:04:31.835569 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:31.835865 kubelet[2926]: E0906 00:04:31.835777 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.835865 kubelet[2926]: W0906 00:04:31.835792 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.835988 kubelet[2926]: E0906 00:04:31.835936 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:31.843268 kubelet[2926]: E0906 00:04:31.842365 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.843268 kubelet[2926]: W0906 00:04:31.842412 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.843268 kubelet[2926]: E0906 00:04:31.842630 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:31.843268 kubelet[2926]: E0906 00:04:31.843086 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.843268 kubelet[2926]: W0906 00:04:31.843109 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.843683 kubelet[2926]: E0906 00:04:31.843326 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:31.843683 kubelet[2926]: E0906 00:04:31.843561 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.843683 kubelet[2926]: W0906 00:04:31.843579 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.843854 kubelet[2926]: E0906 00:04:31.843764 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:31.845283 kubelet[2926]: E0906 00:04:31.843991 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.845283 kubelet[2926]: W0906 00:04:31.844023 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.845283 kubelet[2926]: E0906 00:04:31.844240 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:31.845283 kubelet[2926]: E0906 00:04:31.844810 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.845283 kubelet[2926]: W0906 00:04:31.844836 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.845283 kubelet[2926]: E0906 00:04:31.845067 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:31.845763 kubelet[2926]: E0906 00:04:31.845412 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.845763 kubelet[2926]: W0906 00:04:31.845434 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.845763 kubelet[2926]: E0906 00:04:31.845605 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:31.845936 kubelet[2926]: E0906 00:04:31.845830 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.845936 kubelet[2926]: W0906 00:04:31.845847 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.846047 kubelet[2926]: E0906 00:04:31.846028 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:31.847264 kubelet[2926]: E0906 00:04:31.846387 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.847264 kubelet[2926]: W0906 00:04:31.846422 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.847264 kubelet[2926]: E0906 00:04:31.846456 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:31.847264 kubelet[2926]: E0906 00:04:31.846878 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.847264 kubelet[2926]: W0906 00:04:31.846900 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.847264 kubelet[2926]: E0906 00:04:31.846923 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:31.880605 kubelet[2926]: E0906 00:04:31.880535 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:31.880605 kubelet[2926]: W0906 00:04:31.880578 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:31.880867 kubelet[2926]: E0906 00:04:31.880614 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:32.086959 env[1848]: time="2025-09-06T00:04:32.086903055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d7d6cbb8f-7jvkr,Uid:fc3b92db-3c15-4baf-8532-37e6a9bca7af,Namespace:calico-system,Attempt:0,} returns sandbox id \"d69899ca3a66158a0b4db747833310a0b756c1cbdc5463efe2e60b1a4bbcbe27\"" Sep 6 00:04:32.095904 env[1848]: time="2025-09-06T00:04:32.095838742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 6 00:04:32.150000 audit[3462]: NETFILTER_CFG table=filter:97 family=2 entries=21 op=nft_register_rule pid=3462 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:04:32.150000 audit[3462]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffc39c46e0 a2=0 a3=1 items=0 ppid=3026 pid=3462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:32.150000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:04:32.157000 audit[3462]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=3462 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:04:32.157000 audit[3462]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc39c46e0 a2=0 a3=1 items=0 ppid=3026 pid=3462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:32.157000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:04:32.218713 env[1848]: time="2025-09-06T00:04:32.218573613Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-node-fgtb5,Uid:d3860f5e-9377-485c-99a0-76a1028f7c20,Namespace:calico-system,Attempt:0,} returns sandbox id \"0629f4156aa8104fffd04d639036b780f9eb814deea9c56f707e10d19ffc8e57\"" Sep 6 00:04:33.175000 audit[3470]: NETFILTER_CFG table=filter:99 family=2 entries=22 op=nft_register_rule pid=3470 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:04:33.175000 audit[3470]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffffe8dbf10 a2=0 a3=1 items=0 ppid=3026 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:33.175000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:04:33.179000 audit[3470]: NETFILTER_CFG table=nat:100 family=2 entries=12 op=nft_register_rule pid=3470 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:04:33.179000 audit[3470]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffe8dbf10 a2=0 a3=1 items=0 ppid=3026 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:33.179000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:04:33.239670 kubelet[2926]: E0906 00:04:33.238806 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zpqph" podUID="8f59dc9d-44f6-4633-8546-11f2219b7da2" Sep 6 00:04:33.402971 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1025519440.mount: Deactivated successfully. Sep 6 00:04:35.113520 env[1848]: time="2025-09-06T00:04:35.113461346Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:04:35.117886 env[1848]: time="2025-09-06T00:04:35.117831064Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:04:35.121433 env[1848]: time="2025-09-06T00:04:35.121364374Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:04:35.125130 env[1848]: time="2025-09-06T00:04:35.125073737Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:04:35.126246 env[1848]: time="2025-09-06T00:04:35.126152781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 6 00:04:35.133340 env[1848]: time="2025-09-06T00:04:35.132533844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 6 00:04:35.165284 env[1848]: time="2025-09-06T00:04:35.165217428Z" level=info msg="CreateContainer within sandbox \"d69899ca3a66158a0b4db747833310a0b756c1cbdc5463efe2e60b1a4bbcbe27\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 6 00:04:35.192157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2765159884.mount: Deactivated successfully. 
Sep 6 00:04:35.199579 env[1848]: time="2025-09-06T00:04:35.199479950Z" level=info msg="CreateContainer within sandbox \"d69899ca3a66158a0b4db747833310a0b756c1cbdc5463efe2e60b1a4bbcbe27\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1ba0c44a778455922a627eb34cc5757d2841fec0f5ff1dac965685b485612451\"" Sep 6 00:04:35.201226 env[1848]: time="2025-09-06T00:04:35.201085231Z" level=info msg="StartContainer for \"1ba0c44a778455922a627eb34cc5757d2841fec0f5ff1dac965685b485612451\"" Sep 6 00:04:35.240807 kubelet[2926]: E0906 00:04:35.239095 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zpqph" podUID="8f59dc9d-44f6-4633-8546-11f2219b7da2" Sep 6 00:04:35.366079 env[1848]: time="2025-09-06T00:04:35.365921531Z" level=info msg="StartContainer for \"1ba0c44a778455922a627eb34cc5757d2841fec0f5ff1dac965685b485612451\" returns successfully" Sep 6 00:04:35.481316 kubelet[2926]: E0906 00:04:35.481268 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:35.481316 kubelet[2926]: W0906 00:04:35.481306 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:35.481610 kubelet[2926]: E0906 00:04:35.481340 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:35.482500 kubelet[2926]: E0906 00:04:35.481737 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:35.482500 kubelet[2926]: W0906 00:04:35.481771 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:35.482500 kubelet[2926]: E0906 00:04:35.481797 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:35.482500 kubelet[2926]: E0906 00:04:35.482434 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:35.482500 kubelet[2926]: W0906 00:04:35.482503 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:35.482892 kubelet[2926]: E0906 00:04:35.482526 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:35.483297 kubelet[2926]: E0906 00:04:35.483250 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:35.483297 kubelet[2926]: W0906 00:04:35.483283 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:35.483492 kubelet[2926]: E0906 00:04:35.483310 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:35.483707 kubelet[2926]: E0906 00:04:35.483672 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:35.483707 kubelet[2926]: W0906 00:04:35.483702 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:35.483879 kubelet[2926]: E0906 00:04:35.483727 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:35.484307 kubelet[2926]: E0906 00:04:35.484256 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:35.484307 kubelet[2926]: W0906 00:04:35.484290 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:35.484504 kubelet[2926]: E0906 00:04:35.484323 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:35.484697 kubelet[2926]: E0906 00:04:35.484659 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:35.484697 kubelet[2926]: W0906 00:04:35.484687 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:35.484848 kubelet[2926]: E0906 00:04:35.484710 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:35.489010 kubelet[2926]: E0906 00:04:35.488951 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:35.489010 kubelet[2926]: W0906 00:04:35.488994 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:35.489329 kubelet[2926]: E0906 00:04:35.489028 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:35.494655 kubelet[2926]: E0906 00:04:35.494606 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:35.494655 kubelet[2926]: W0906 00:04:35.494645 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:35.494892 kubelet[2926]: E0906 00:04:35.494679 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:35.495601 kubelet[2926]: E0906 00:04:35.495560 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:35.495713 kubelet[2926]: W0906 00:04:35.495632 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:35.495713 kubelet[2926]: E0906 00:04:35.495665 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:35.497694 kubelet[2926]: E0906 00:04:35.497321 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:35.497694 kubelet[2926]: W0906 00:04:35.497350 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:35.497694 kubelet[2926]: E0906 00:04:35.497380 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:35.498355 kubelet[2926]: E0906 00:04:35.498007 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:35.498355 kubelet[2926]: W0906 00:04:35.498031 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:35.498355 kubelet[2926]: E0906 00:04:35.498055 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:35.498853 kubelet[2926]: E0906 00:04:35.498660 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:35.498853 kubelet[2926]: W0906 00:04:35.498680 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:35.498853 kubelet[2926]: E0906 00:04:35.498701 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:35.499318 kubelet[2926]: E0906 00:04:35.499295 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:35.499604 kubelet[2926]: W0906 00:04:35.499421 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:35.499604 kubelet[2926]: E0906 00:04:35.499461 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:35.500530 kubelet[2926]: E0906 00:04:35.500501 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:35.500692 kubelet[2926]: W0906 00:04:35.500666 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:35.500813 kubelet[2926]: E0906 00:04:35.500788 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:35.564453 kubelet[2926]: E0906 00:04:35.564418 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:35.564683 kubelet[2926]: W0906 00:04:35.564655 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:35.564808 kubelet[2926]: E0906 00:04:35.564783 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:35.565414 kubelet[2926]: E0906 00:04:35.565379 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:35.565706 kubelet[2926]: W0906 00:04:35.565652 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:35.565893 kubelet[2926]: E0906 00:04:35.565851 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:35.568129 kubelet[2926]: E0906 00:04:35.568089 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:35.568439 kubelet[2926]: W0906 00:04:35.568403 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:35.568606 kubelet[2926]: E0906 00:04:35.568580 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:35.571886 kubelet[2926]: E0906 00:04:35.571842 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:35.572213 kubelet[2926]: W0906 00:04:35.572152 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:35.572403 kubelet[2926]: E0906 00:04:35.572360 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:35.573017 kubelet[2926]: E0906 00:04:35.572984 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:35.573243 kubelet[2926]: W0906 00:04:35.573208 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:35.573791 kubelet[2926]: E0906 00:04:35.573739 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:35.574651 kubelet[2926]: E0906 00:04:35.574617 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:35.574863 kubelet[2926]: W0906 00:04:35.574835 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:35.575220 kubelet[2926]: E0906 00:04:35.575153 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:35.575609 kubelet[2926]: E0906 00:04:35.575586 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:35.575742 kubelet[2926]: W0906 00:04:35.575716 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:35.575957 kubelet[2926]: E0906 00:04:35.575914 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:35.578438 kubelet[2926]: E0906 00:04:35.578397 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:35.578731 kubelet[2926]: W0906 00:04:35.578682 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:35.579016 kubelet[2926]: E0906 00:04:35.578973 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:35.580064 kubelet[2926]: E0906 00:04:35.580026 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:35.580361 kubelet[2926]: W0906 00:04:35.580324 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:35.580742 kubelet[2926]: E0906 00:04:35.580712 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:35.581097 kubelet[2926]: E0906 00:04:35.581072 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:35.581353 kubelet[2926]: W0906 00:04:35.581312 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:35.581554 kubelet[2926]: E0906 00:04:35.581514 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:35.582316 kubelet[2926]: E0906 00:04:35.582267 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:35.582533 kubelet[2926]: W0906 00:04:35.582501 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:35.582742 kubelet[2926]: E0906 00:04:35.582691 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:35.583622 kubelet[2926]: E0906 00:04:35.583588 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:35.583811 kubelet[2926]: W0906 00:04:35.583779 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:35.584025 kubelet[2926]: E0906 00:04:35.583981 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:35.587650 kubelet[2926]: E0906 00:04:35.587584 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:35.587924 kubelet[2926]: W0906 00:04:35.587889 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:35.588109 kubelet[2926]: E0906 00:04:35.588080 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:35.588757 kubelet[2926]: E0906 00:04:35.588723 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:35.588945 kubelet[2926]: W0906 00:04:35.588912 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:35.589087 kubelet[2926]: E0906 00:04:35.589060 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:35.596252 kubelet[2926]: E0906 00:04:35.596205 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:35.596507 kubelet[2926]: W0906 00:04:35.596475 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:35.596663 kubelet[2926]: E0906 00:04:35.596635 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:35.597242 kubelet[2926]: E0906 00:04:35.597212 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:35.597453 kubelet[2926]: W0906 00:04:35.597400 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:35.597614 kubelet[2926]: E0906 00:04:35.597585 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:35.599850 kubelet[2926]: E0906 00:04:35.599765 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:35.600228 kubelet[2926]: W0906 00:04:35.600155 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:35.600518 kubelet[2926]: E0906 00:04:35.600400 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:35.602306 kubelet[2926]: E0906 00:04:35.602253 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:35.602569 kubelet[2926]: W0906 00:04:35.602534 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:35.602709 kubelet[2926]: E0906 00:04:35.602682 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:36.389033 kubelet[2926]: I0906 00:04:36.388464 2926 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 00:04:36.411273 kubelet[2926]: E0906 00:04:36.411170 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:36.411979 kubelet[2926]: W0906 00:04:36.411749 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:36.411979 kubelet[2926]: E0906 00:04:36.411799 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:36.412481 kubelet[2926]: E0906 00:04:36.412453 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:36.412637 kubelet[2926]: W0906 00:04:36.412609 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:36.412765 kubelet[2926]: E0906 00:04:36.412739 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:36.413334 kubelet[2926]: E0906 00:04:36.413304 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:36.413505 kubelet[2926]: W0906 00:04:36.413477 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:36.413632 kubelet[2926]: E0906 00:04:36.413607 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:36.414103 kubelet[2926]: E0906 00:04:36.414075 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:36.414335 kubelet[2926]: W0906 00:04:36.414303 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:36.414473 kubelet[2926]: E0906 00:04:36.414447 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:36.415128 kubelet[2926]: E0906 00:04:36.415097 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:36.415348 kubelet[2926]: W0906 00:04:36.415318 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:36.415476 kubelet[2926]: E0906 00:04:36.415450 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:36.415972 kubelet[2926]: E0906 00:04:36.415947 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:36.416271 kubelet[2926]: W0906 00:04:36.416175 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:36.416406 kubelet[2926]: E0906 00:04:36.416380 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:36.416928 kubelet[2926]: E0906 00:04:36.416901 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:36.417074 kubelet[2926]: W0906 00:04:36.417048 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:36.417324 kubelet[2926]: E0906 00:04:36.417298 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:36.417816 kubelet[2926]: E0906 00:04:36.417792 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:36.417963 kubelet[2926]: W0906 00:04:36.417937 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:36.418128 kubelet[2926]: E0906 00:04:36.418101 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:36.418684 kubelet[2926]: E0906 00:04:36.418649 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:36.418938 kubelet[2926]: W0906 00:04:36.418683 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:36.418938 kubelet[2926]: E0906 00:04:36.418714 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:36.419128 kubelet[2926]: E0906 00:04:36.419089 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:36.419260 kubelet[2926]: W0906 00:04:36.419126 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:36.419260 kubelet[2926]: E0906 00:04:36.419157 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:36.419629 kubelet[2926]: E0906 00:04:36.419596 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:36.419771 kubelet[2926]: W0906 00:04:36.419629 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:36.419771 kubelet[2926]: E0906 00:04:36.419658 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:36.420043 kubelet[2926]: E0906 00:04:36.420013 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:36.420125 kubelet[2926]: W0906 00:04:36.420043 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:36.420125 kubelet[2926]: E0906 00:04:36.420068 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:36.420510 kubelet[2926]: E0906 00:04:36.420479 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:36.420639 kubelet[2926]: W0906 00:04:36.420509 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:36.420639 kubelet[2926]: E0906 00:04:36.420535 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:36.420895 kubelet[2926]: E0906 00:04:36.420866 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:36.420968 kubelet[2926]: W0906 00:04:36.420895 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:36.420968 kubelet[2926]: E0906 00:04:36.420917 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:36.421301 kubelet[2926]: E0906 00:04:36.421271 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:36.421420 kubelet[2926]: W0906 00:04:36.421305 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:36.421420 kubelet[2926]: E0906 00:04:36.421332 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:36.482899 kubelet[2926]: E0906 00:04:36.482765 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:36.482899 kubelet[2926]: W0906 00:04:36.482807 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:36.482899 kubelet[2926]: E0906 00:04:36.482859 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:36.483341 kubelet[2926]: E0906 00:04:36.483280 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:36.483341 kubelet[2926]: W0906 00:04:36.483315 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:36.483530 kubelet[2926]: E0906 00:04:36.483342 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:36.483749 kubelet[2926]: E0906 00:04:36.483702 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:36.483749 kubelet[2926]: W0906 00:04:36.483738 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:36.483899 kubelet[2926]: E0906 00:04:36.483764 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:36.484214 kubelet[2926]: E0906 00:04:36.484157 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:36.484214 kubelet[2926]: W0906 00:04:36.484210 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:36.484420 kubelet[2926]: E0906 00:04:36.484239 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:36.484636 kubelet[2926]: E0906 00:04:36.484588 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:36.484636 kubelet[2926]: W0906 00:04:36.484624 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:36.484798 kubelet[2926]: E0906 00:04:36.484648 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:36.485570 kubelet[2926]: E0906 00:04:36.485467 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:36.485570 kubelet[2926]: W0906 00:04:36.485563 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:36.485764 kubelet[2926]: E0906 00:04:36.485595 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:36.486073 kubelet[2926]: E0906 00:04:36.486023 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:36.486073 kubelet[2926]: W0906 00:04:36.486060 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:36.486272 kubelet[2926]: E0906 00:04:36.486088 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:36.487019 kubelet[2926]: E0906 00:04:36.486984 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:36.487019 kubelet[2926]: W0906 00:04:36.487017 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:36.487295 kubelet[2926]: E0906 00:04:36.487250 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:36.487545 kubelet[2926]: E0906 00:04:36.487516 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:36.487647 kubelet[2926]: W0906 00:04:36.487545 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:36.487748 kubelet[2926]: E0906 00:04:36.487713 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:36.488004 kubelet[2926]: E0906 00:04:36.487957 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:36.488004 kubelet[2926]: W0906 00:04:36.487989 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:36.488168 kubelet[2926]: E0906 00:04:36.488028 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:36.488533 kubelet[2926]: E0906 00:04:36.488500 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:36.488716 kubelet[2926]: W0906 00:04:36.488536 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:36.488716 kubelet[2926]: E0906 00:04:36.488572 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:36.489342 kubelet[2926]: E0906 00:04:36.488976 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:36.489342 kubelet[2926]: W0906 00:04:36.489031 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:36.489342 kubelet[2926]: E0906 00:04:36.489237 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:36.489849 kubelet[2926]: E0906 00:04:36.489815 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:36.489935 kubelet[2926]: W0906 00:04:36.489847 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:36.489935 kubelet[2926]: E0906 00:04:36.489912 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:36.490651 kubelet[2926]: E0906 00:04:36.490613 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:36.490791 kubelet[2926]: W0906 00:04:36.490679 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:36.491104 kubelet[2926]: E0906 00:04:36.490722 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:36.491538 kubelet[2926]: E0906 00:04:36.491503 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:36.491643 kubelet[2926]: W0906 00:04:36.491556 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:36.491643 kubelet[2926]: E0906 00:04:36.491601 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:36.492394 kubelet[2926]: E0906 00:04:36.492355 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:36.493409 kubelet[2926]: W0906 00:04:36.492572 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:36.493646 kubelet[2926]: E0906 00:04:36.493614 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:36.494148 kubelet[2926]: E0906 00:04:36.494108 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:36.494148 kubelet[2926]: W0906 00:04:36.494144 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:36.494382 kubelet[2926]: E0906 00:04:36.494178 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 00:04:36.494713 kubelet[2926]: E0906 00:04:36.494679 2926 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 00:04:36.494819 kubelet[2926]: W0906 00:04:36.494713 2926 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 00:04:36.494819 kubelet[2926]: E0906 00:04:36.494741 2926 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 00:04:36.938556 env[1848]: time="2025-09-06T00:04:36.938472479Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:04:36.944359 env[1848]: time="2025-09-06T00:04:36.944293903Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:04:36.949387 env[1848]: time="2025-09-06T00:04:36.949302564Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:04:36.954092 env[1848]: time="2025-09-06T00:04:36.954029608Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:04:36.956303 env[1848]: time="2025-09-06T00:04:36.955516225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference 
\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 6 00:04:36.963268 env[1848]: time="2025-09-06T00:04:36.963034090Z" level=info msg="CreateContainer within sandbox \"0629f4156aa8104fffd04d639036b780f9eb814deea9c56f707e10d19ffc8e57\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 6 00:04:37.004434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3305738711.mount: Deactivated successfully. Sep 6 00:04:37.022693 env[1848]: time="2025-09-06T00:04:37.022613631Z" level=info msg="CreateContainer within sandbox \"0629f4156aa8104fffd04d639036b780f9eb814deea9c56f707e10d19ffc8e57\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a7c83858f5479776ae7ce938d240088ebc63083f976f7e0c8ae1af13b6559517\"" Sep 6 00:04:37.023899 env[1848]: time="2025-09-06T00:04:37.023812458Z" level=info msg="StartContainer for \"a7c83858f5479776ae7ce938d240088ebc63083f976f7e0c8ae1af13b6559517\"" Sep 6 00:04:37.178281 env[1848]: time="2025-09-06T00:04:37.178101267Z" level=info msg="StartContainer for \"a7c83858f5479776ae7ce938d240088ebc63083f976f7e0c8ae1af13b6559517\" returns successfully" Sep 6 00:04:37.239112 kubelet[2926]: E0906 00:04:37.238770 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zpqph" podUID="8f59dc9d-44f6-4633-8546-11f2219b7da2" Sep 6 00:04:37.273550 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7c83858f5479776ae7ce938d240088ebc63083f976f7e0c8ae1af13b6559517-rootfs.mount: Deactivated successfully. 
Sep 6 00:04:37.428393 kubelet[2926]: I0906 00:04:37.428304 2926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5d7d6cbb8f-7jvkr" podStartSLOduration=3.392923306 podStartE2EDuration="6.428278369s" podCreationTimestamp="2025-09-06 00:04:31 +0000 UTC" firstStartedPulling="2025-09-06 00:04:32.095103023 +0000 UTC m=+28.232578779" lastFinishedPulling="2025-09-06 00:04:35.130458074 +0000 UTC m=+31.267933842" observedRunningTime="2025-09-06 00:04:35.410790405 +0000 UTC m=+31.548266185" watchObservedRunningTime="2025-09-06 00:04:37.428278369 +0000 UTC m=+33.565754125" Sep 6 00:04:37.652990 env[1848]: time="2025-09-06T00:04:37.652518557Z" level=info msg="shim disconnected" id=a7c83858f5479776ae7ce938d240088ebc63083f976f7e0c8ae1af13b6559517 Sep 6 00:04:37.652990 env[1848]: time="2025-09-06T00:04:37.652781308Z" level=warning msg="cleaning up after shim disconnected" id=a7c83858f5479776ae7ce938d240088ebc63083f976f7e0c8ae1af13b6559517 namespace=k8s.io Sep 6 00:04:37.652990 env[1848]: time="2025-09-06T00:04:37.652866891Z" level=info msg="cleaning up dead shim" Sep 6 00:04:37.670012 env[1848]: time="2025-09-06T00:04:37.669923044Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:04:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3631 runtime=io.containerd.runc.v2\n" Sep 6 00:04:38.404926 env[1848]: time="2025-09-06T00:04:38.404845466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 6 00:04:39.240580 kubelet[2926]: E0906 00:04:39.239725 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zpqph" podUID="8f59dc9d-44f6-4633-8546-11f2219b7da2" Sep 6 00:04:41.239203 kubelet[2926]: E0906 00:04:41.239104 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is 
not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zpqph" podUID="8f59dc9d-44f6-4633-8546-11f2219b7da2" Sep 6 00:04:43.196231 env[1848]: time="2025-09-06T00:04:43.196136918Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:04:43.199659 env[1848]: time="2025-09-06T00:04:43.199579515Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:04:43.204389 env[1848]: time="2025-09-06T00:04:43.204335696Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:04:43.206535 env[1848]: time="2025-09-06T00:04:43.206477322Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:04:43.209718 env[1848]: time="2025-09-06T00:04:43.209645385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 6 00:04:43.216138 env[1848]: time="2025-09-06T00:04:43.215596769Z" level=info msg="CreateContainer within sandbox \"0629f4156aa8104fffd04d639036b780f9eb814deea9c56f707e10d19ffc8e57\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 6 00:04:43.242700 kubelet[2926]: E0906 00:04:43.240742 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zpqph" podUID="8f59dc9d-44f6-4633-8546-11f2219b7da2" Sep 6 00:04:43.244455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3308915192.mount: Deactivated successfully. Sep 6 00:04:43.253056 env[1848]: time="2025-09-06T00:04:43.252991540Z" level=info msg="CreateContainer within sandbox \"0629f4156aa8104fffd04d639036b780f9eb814deea9c56f707e10d19ffc8e57\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6efc486760d27b8732390592565ab0d734181fa6d1e9c04f517bca7b4fb8a1e3\"" Sep 6 00:04:43.255824 env[1848]: time="2025-09-06T00:04:43.254591996Z" level=info msg="StartContainer for \"6efc486760d27b8732390592565ab0d734181fa6d1e9c04f517bca7b4fb8a1e3\"" Sep 6 00:04:43.316150 systemd[1]: run-containerd-runc-k8s.io-6efc486760d27b8732390592565ab0d734181fa6d1e9c04f517bca7b4fb8a1e3-runc.s3b2RC.mount: Deactivated successfully. Sep 6 00:04:43.418441 env[1848]: time="2025-09-06T00:04:43.418387168Z" level=info msg="StartContainer for \"6efc486760d27b8732390592565ab0d734181fa6d1e9c04f517bca7b4fb8a1e3\" returns successfully" Sep 6 00:04:44.178351 kubelet[2926]: I0906 00:04:44.178289 2926 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 00:04:44.253000 audit[3682]: NETFILTER_CFG table=filter:101 family=2 entries=21 op=nft_register_rule pid=3682 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:04:44.256270 kernel: kauditd_printk_skb: 14 callbacks suppressed Sep 6 00:04:44.256429 kernel: audit: type=1325 audit(1757117084.253:306): table=filter:101 family=2 entries=21 op=nft_register_rule pid=3682 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:04:44.253000 audit[3682]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffcd886140 a2=0 a3=1 items=0 ppid=3026 pid=3682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:44.275415 kernel: audit: type=1300 audit(1757117084.253:306): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffcd886140 a2=0 a3=1 items=0 ppid=3026 pid=3682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:44.253000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:04:44.283674 kernel: audit: type=1327 audit(1757117084.253:306): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:04:44.283808 kernel: audit: type=1325 audit(1757117084.277:307): table=nat:102 family=2 entries=19 op=nft_register_chain pid=3682 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:04:44.277000 audit[3682]: NETFILTER_CFG table=nat:102 family=2 entries=19 op=nft_register_chain pid=3682 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:04:44.277000 audit[3682]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffcd886140 a2=0 a3=1 items=0 ppid=3026 pid=3682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:44.277000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:04:44.310215 kernel: audit: type=1300 audit(1757117084.277:307): arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffcd886140 a2=0 a3=1 items=0 ppid=3026 pid=3682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:44.310351 kernel: audit: type=1327 audit(1757117084.277:307): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:04:44.724945 env[1848]: time="2025-09-06T00:04:44.724845481Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:04:44.766507 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6efc486760d27b8732390592565ab0d734181fa6d1e9c04f517bca7b4fb8a1e3-rootfs.mount: Deactivated successfully. Sep 6 00:04:44.800846 kubelet[2926]: I0906 00:04:44.799983 2926 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 6 00:04:44.927319 kubelet[2926]: W0906 00:04:44.927272 2926 reflector.go:561] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ip-172-31-24-61" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-24-61' and this object Sep 6 00:04:44.938123 kubelet[2926]: E0906 00:04:44.937393 2926 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ip-172-31-24-61\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ip-172-31-24-61' and this object" logger="UnhandledError" Sep 6 00:04:44.959040 kubelet[2926]: I0906 00:04:44.958934 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rh8tg\" (UniqueName: \"kubernetes.io/projected/482c659b-c6e7-44f9-a152-a4d1a37c30db-kube-api-access-rh8tg\") pod \"whisker-b6bdf98bc-kchk5\" (UID: \"482c659b-c6e7-44f9-a152-a4d1a37c30db\") " pod="calico-system/whisker-b6bdf98bc-kchk5" Sep 6 00:04:44.959276 kubelet[2926]: I0906 00:04:44.959103 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgrm7\" (UniqueName: \"kubernetes.io/projected/92c99ea2-eb5a-48a9-9626-85e265ce8b17-kube-api-access-cgrm7\") pod \"calico-kube-controllers-764999b789-c2t8l\" (UID: \"92c99ea2-eb5a-48a9-9626-85e265ce8b17\") " pod="calico-system/calico-kube-controllers-764999b789-c2t8l" Sep 6 00:04:44.959276 kubelet[2926]: I0906 00:04:44.959246 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/46f8567e-23c4-451e-82e1-50ad6fac0a71-goldmane-key-pair\") pod \"goldmane-7988f88666-8wwqq\" (UID: \"46f8567e-23c4-451e-82e1-50ad6fac0a71\") " pod="calico-system/goldmane-7988f88666-8wwqq" Sep 6 00:04:44.959460 kubelet[2926]: I0906 00:04:44.959336 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d26p\" (UniqueName: \"kubernetes.io/projected/1c9fcc46-cf36-4206-bdd4-bb1b1b8568f2-kube-api-access-8d26p\") pod \"calico-apiserver-797f9c6c85-hw6d9\" (UID: \"1c9fcc46-cf36-4206-bdd4-bb1b1b8568f2\") " pod="calico-apiserver/calico-apiserver-797f9c6c85-hw6d9" Sep 6 00:04:44.959460 kubelet[2926]: I0906 00:04:44.959427 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46f8567e-23c4-451e-82e1-50ad6fac0a71-goldmane-ca-bundle\") pod \"goldmane-7988f88666-8wwqq\" (UID: \"46f8567e-23c4-451e-82e1-50ad6fac0a71\") " pod="calico-system/goldmane-7988f88666-8wwqq" Sep 6 00:04:44.959609 kubelet[2926]: I0906 
00:04:44.959469 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92c99ea2-eb5a-48a9-9626-85e265ce8b17-tigera-ca-bundle\") pod \"calico-kube-controllers-764999b789-c2t8l\" (UID: \"92c99ea2-eb5a-48a9-9626-85e265ce8b17\") " pod="calico-system/calico-kube-controllers-764999b789-c2t8l" Sep 6 00:04:44.959609 kubelet[2926]: I0906 00:04:44.959553 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7nsz\" (UniqueName: \"kubernetes.io/projected/d3def65b-39c6-44ff-8623-e889bb6d6c02-kube-api-access-v7nsz\") pod \"calico-apiserver-797f9c6c85-zgcsm\" (UID: \"d3def65b-39c6-44ff-8623-e889bb6d6c02\") " pod="calico-apiserver/calico-apiserver-797f9c6c85-zgcsm" Sep 6 00:04:44.959741 kubelet[2926]: I0906 00:04:44.959630 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d3def65b-39c6-44ff-8623-e889bb6d6c02-calico-apiserver-certs\") pod \"calico-apiserver-797f9c6c85-zgcsm\" (UID: \"d3def65b-39c6-44ff-8623-e889bb6d6c02\") " pod="calico-apiserver/calico-apiserver-797f9c6c85-zgcsm" Sep 6 00:04:44.959741 kubelet[2926]: I0906 00:04:44.959712 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46f8567e-23c4-451e-82e1-50ad6fac0a71-config\") pod \"goldmane-7988f88666-8wwqq\" (UID: \"46f8567e-23c4-451e-82e1-50ad6fac0a71\") " pod="calico-system/goldmane-7988f88666-8wwqq" Sep 6 00:04:44.960008 kubelet[2926]: I0906 00:04:44.959927 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwsrz\" (UniqueName: \"kubernetes.io/projected/4dca1103-4d65-4dd5-9f1d-b159dc97b5c8-kube-api-access-lwsrz\") pod \"coredns-7c65d6cfc9-dz8br\" (UID: 
\"4dca1103-4d65-4dd5-9f1d-b159dc97b5c8\") " pod="kube-system/coredns-7c65d6cfc9-dz8br" Sep 6 00:04:44.960112 kubelet[2926]: I0906 00:04:44.960070 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/482c659b-c6e7-44f9-a152-a4d1a37c30db-whisker-backend-key-pair\") pod \"whisker-b6bdf98bc-kchk5\" (UID: \"482c659b-c6e7-44f9-a152-a4d1a37c30db\") " pod="calico-system/whisker-b6bdf98bc-kchk5" Sep 6 00:04:44.960493 kubelet[2926]: I0906 00:04:44.960446 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swh2h\" (UniqueName: \"kubernetes.io/projected/695d8e71-3eef-4ad5-94de-576fc3ef9397-kube-api-access-swh2h\") pod \"coredns-7c65d6cfc9-mt5pq\" (UID: \"695d8e71-3eef-4ad5-94de-576fc3ef9397\") " pod="kube-system/coredns-7c65d6cfc9-mt5pq" Sep 6 00:04:44.960723 kubelet[2926]: I0906 00:04:44.960657 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/695d8e71-3eef-4ad5-94de-576fc3ef9397-config-volume\") pod \"coredns-7c65d6cfc9-mt5pq\" (UID: \"695d8e71-3eef-4ad5-94de-576fc3ef9397\") " pod="kube-system/coredns-7c65d6cfc9-mt5pq" Sep 6 00:04:44.960905 kubelet[2926]: I0906 00:04:44.960865 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4dca1103-4d65-4dd5-9f1d-b159dc97b5c8-config-volume\") pod \"coredns-7c65d6cfc9-dz8br\" (UID: \"4dca1103-4d65-4dd5-9f1d-b159dc97b5c8\") " pod="kube-system/coredns-7c65d6cfc9-dz8br" Sep 6 00:04:44.963782 kubelet[2926]: I0906 00:04:44.963608 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/482c659b-c6e7-44f9-a152-a4d1a37c30db-whisker-ca-bundle\") pod 
\"whisker-b6bdf98bc-kchk5\" (UID: \"482c659b-c6e7-44f9-a152-a4d1a37c30db\") " pod="calico-system/whisker-b6bdf98bc-kchk5" Sep 6 00:04:44.963997 kubelet[2926]: I0906 00:04:44.963892 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1c9fcc46-cf36-4206-bdd4-bb1b1b8568f2-calico-apiserver-certs\") pod \"calico-apiserver-797f9c6c85-hw6d9\" (UID: \"1c9fcc46-cf36-4206-bdd4-bb1b1b8568f2\") " pod="calico-apiserver/calico-apiserver-797f9c6c85-hw6d9" Sep 6 00:04:44.964084 kubelet[2926]: I0906 00:04:44.964060 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q82md\" (UniqueName: \"kubernetes.io/projected/46f8567e-23c4-451e-82e1-50ad6fac0a71-kube-api-access-q82md\") pod \"goldmane-7988f88666-8wwqq\" (UID: \"46f8567e-23c4-451e-82e1-50ad6fac0a71\") " pod="calico-system/goldmane-7988f88666-8wwqq" Sep 6 00:04:45.102549 env[1848]: time="2025-09-06T00:04:45.102018084Z" level=info msg="shim disconnected" id=6efc486760d27b8732390592565ab0d734181fa6d1e9c04f517bca7b4fb8a1e3 Sep 6 00:04:45.102549 env[1848]: time="2025-09-06T00:04:45.102101002Z" level=warning msg="cleaning up after shim disconnected" id=6efc486760d27b8732390592565ab0d734181fa6d1e9c04f517bca7b4fb8a1e3 namespace=k8s.io Sep 6 00:04:45.102549 env[1848]: time="2025-09-06T00:04:45.102122412Z" level=info msg="cleaning up dead shim" Sep 6 00:04:45.152257 env[1848]: time="2025-09-06T00:04:45.151657647Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:04:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3703 runtime=io.containerd.runc.v2\n" Sep 6 00:04:45.245240 env[1848]: time="2025-09-06T00:04:45.245152526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zpqph,Uid:8f59dc9d-44f6-4633-8546-11f2219b7da2,Namespace:calico-system,Attempt:0,}" Sep 6 00:04:45.253636 env[1848]: 
time="2025-09-06T00:04:45.253560538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dz8br,Uid:4dca1103-4d65-4dd5-9f1d-b159dc97b5c8,Namespace:kube-system,Attempt:0,}" Sep 6 00:04:45.278109 env[1848]: time="2025-09-06T00:04:45.278026434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b6bdf98bc-kchk5,Uid:482c659b-c6e7-44f9-a152-a4d1a37c30db,Namespace:calico-system,Attempt:0,}" Sep 6 00:04:45.278553 env[1848]: time="2025-09-06T00:04:45.278490517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-8wwqq,Uid:46f8567e-23c4-451e-82e1-50ad6fac0a71,Namespace:calico-system,Attempt:0,}" Sep 6 00:04:45.283496 env[1848]: time="2025-09-06T00:04:45.283302516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-764999b789-c2t8l,Uid:92c99ea2-eb5a-48a9-9626-85e265ce8b17,Namespace:calico-system,Attempt:0,}" Sep 6 00:04:45.462901 env[1848]: time="2025-09-06T00:04:45.462843440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 6 00:04:45.490374 env[1848]: time="2025-09-06T00:04:45.490310462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mt5pq,Uid:695d8e71-3eef-4ad5-94de-576fc3ef9397,Namespace:kube-system,Attempt:0,}" Sep 6 00:04:45.669683 env[1848]: time="2025-09-06T00:04:45.669510293Z" level=error msg="Failed to destroy network for sandbox \"7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:45.670377 env[1848]: time="2025-09-06T00:04:45.670289138Z" level=error msg="encountered an error cleaning up failed sandbox \"7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:45.670642 env[1848]: time="2025-09-06T00:04:45.670386109Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-8wwqq,Uid:46f8567e-23c4-451e-82e1-50ad6fac0a71,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:45.672995 kubelet[2926]: E0906 00:04:45.671123 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:45.672995 kubelet[2926]: E0906 00:04:45.671323 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-8wwqq" Sep 6 00:04:45.672995 kubelet[2926]: E0906 00:04:45.671381 2926 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-8wwqq" Sep 6 00:04:45.674318 kubelet[2926]: E0906 00:04:45.671488 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-8wwqq_calico-system(46f8567e-23c4-451e-82e1-50ad6fac0a71)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-8wwqq_calico-system(46f8567e-23c4-451e-82e1-50ad6fac0a71)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-8wwqq" podUID="46f8567e-23c4-451e-82e1-50ad6fac0a71" Sep 6 00:04:45.715825 env[1848]: time="2025-09-06T00:04:45.714928951Z" level=error msg="Failed to destroy network for sandbox \"52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:45.715825 env[1848]: time="2025-09-06T00:04:45.715594587Z" level=error msg="encountered an error cleaning up failed sandbox \"52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:45.715825 env[1848]: time="2025-09-06T00:04:45.715677373Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dz8br,Uid:4dca1103-4d65-4dd5-9f1d-b159dc97b5c8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:45.716791 kubelet[2926]: E0906 00:04:45.716429 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:45.716791 kubelet[2926]: E0906 00:04:45.716540 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dz8br" Sep 6 00:04:45.716791 kubelet[2926]: E0906 00:04:45.716598 2926 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dz8br" Sep 6 00:04:45.717711 kubelet[2926]: E0906 00:04:45.716700 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dz8br_kube-system(4dca1103-4d65-4dd5-9f1d-b159dc97b5c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7c65d6cfc9-dz8br_kube-system(4dca1103-4d65-4dd5-9f1d-b159dc97b5c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dz8br" podUID="4dca1103-4d65-4dd5-9f1d-b159dc97b5c8" Sep 6 00:04:45.730740 env[1848]: time="2025-09-06T00:04:45.730665375Z" level=error msg="Failed to destroy network for sandbox \"84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:45.732104 env[1848]: time="2025-09-06T00:04:45.732030551Z" level=error msg="encountered an error cleaning up failed sandbox \"84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:45.732417 env[1848]: time="2025-09-06T00:04:45.732362990Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-764999b789-c2t8l,Uid:92c99ea2-eb5a-48a9-9626-85e265ce8b17,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:45.736881 kubelet[2926]: E0906 00:04:45.732882 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:45.736881 kubelet[2926]: E0906 00:04:45.733012 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-764999b789-c2t8l" Sep 6 00:04:45.736881 kubelet[2926]: E0906 00:04:45.733050 2926 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-764999b789-c2t8l" Sep 6 00:04:45.737291 kubelet[2926]: E0906 00:04:45.733139 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-764999b789-c2t8l_calico-system(92c99ea2-eb5a-48a9-9626-85e265ce8b17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-764999b789-c2t8l_calico-system(92c99ea2-eb5a-48a9-9626-85e265ce8b17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-764999b789-c2t8l" podUID="92c99ea2-eb5a-48a9-9626-85e265ce8b17" Sep 6 00:04:45.737671 env[1848]: time="2025-09-06T00:04:45.737597164Z" level=error msg="Failed to destroy network for sandbox \"e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:45.738513 env[1848]: time="2025-09-06T00:04:45.738442365Z" level=error msg="encountered an error cleaning up failed sandbox \"e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:45.738767 env[1848]: time="2025-09-06T00:04:45.738715829Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b6bdf98bc-kchk5,Uid:482c659b-c6e7-44f9-a152-a4d1a37c30db,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:45.739597 kubelet[2926]: E0906 00:04:45.739234 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:45.739597 
kubelet[2926]: E0906 00:04:45.739332 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b6bdf98bc-kchk5" Sep 6 00:04:45.739597 kubelet[2926]: E0906 00:04:45.739387 2926 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b6bdf98bc-kchk5" Sep 6 00:04:45.741112 kubelet[2926]: E0906 00:04:45.739478 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-b6bdf98bc-kchk5_calico-system(482c659b-c6e7-44f9-a152-a4d1a37c30db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-b6bdf98bc-kchk5_calico-system(482c659b-c6e7-44f9-a152-a4d1a37c30db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b6bdf98bc-kchk5" podUID="482c659b-c6e7-44f9-a152-a4d1a37c30db" Sep 6 00:04:45.756911 env[1848]: time="2025-09-06T00:04:45.756827997Z" level=error msg="Failed to destroy network for sandbox \"ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:45.757546 env[1848]: time="2025-09-06T00:04:45.757478703Z" level=error msg="encountered an error cleaning up failed sandbox \"ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:45.757673 env[1848]: time="2025-09-06T00:04:45.757564825Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zpqph,Uid:8f59dc9d-44f6-4633-8546-11f2219b7da2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:45.760041 kubelet[2926]: E0906 00:04:45.757950 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:45.760041 kubelet[2926]: E0906 00:04:45.758066 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/csi-node-driver-zpqph" Sep 6 00:04:45.760041 kubelet[2926]: E0906 00:04:45.758123 2926 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zpqph" Sep 6 00:04:45.760398 kubelet[2926]: E0906 00:04:45.758287 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zpqph_calico-system(8f59dc9d-44f6-4633-8546-11f2219b7da2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zpqph_calico-system(8f59dc9d-44f6-4633-8546-11f2219b7da2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zpqph" podUID="8f59dc9d-44f6-4633-8546-11f2219b7da2" Sep 6 00:04:45.803280 env[1848]: time="2025-09-06T00:04:45.803166410Z" level=error msg="Failed to destroy network for sandbox \"f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:45.807861 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827-shm.mount: Deactivated successfully. 
Sep 6 00:04:45.809110 env[1848]: time="2025-09-06T00:04:45.809039288Z" level=error msg="encountered an error cleaning up failed sandbox \"f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:45.809507 env[1848]: time="2025-09-06T00:04:45.809435575Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mt5pq,Uid:695d8e71-3eef-4ad5-94de-576fc3ef9397,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:45.810457 kubelet[2926]: E0906 00:04:45.809911 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:45.810457 kubelet[2926]: E0906 00:04:45.809990 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mt5pq" Sep 6 00:04:45.810457 kubelet[2926]: E0906 00:04:45.810023 2926 kuberuntime_manager.go:1170] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mt5pq" Sep 6 00:04:45.811157 kubelet[2926]: E0906 00:04:45.810087 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-mt5pq_kube-system(695d8e71-3eef-4ad5-94de-576fc3ef9397)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-mt5pq_kube-system(695d8e71-3eef-4ad5-94de-576fc3ef9397)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mt5pq" podUID="695d8e71-3eef-4ad5-94de-576fc3ef9397" Sep 6 00:04:46.070266 kubelet[2926]: E0906 00:04:46.070109 2926 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Sep 6 00:04:46.070541 kubelet[2926]: E0906 00:04:46.070117 2926 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Sep 6 00:04:46.071016 kubelet[2926]: E0906 00:04:46.070714 2926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c9fcc46-cf36-4206-bdd4-bb1b1b8568f2-calico-apiserver-certs podName:1c9fcc46-cf36-4206-bdd4-bb1b1b8568f2 nodeName:}" failed. No retries permitted until 2025-09-06 00:04:46.57067993 +0000 UTC m=+42.708155687 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/1c9fcc46-cf36-4206-bdd4-bb1b1b8568f2-calico-apiserver-certs") pod "calico-apiserver-797f9c6c85-hw6d9" (UID: "1c9fcc46-cf36-4206-bdd4-bb1b1b8568f2") : failed to sync secret cache: timed out waiting for the condition Sep 6 00:04:46.071310 kubelet[2926]: E0906 00:04:46.071273 2926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3def65b-39c6-44ff-8623-e889bb6d6c02-calico-apiserver-certs podName:d3def65b-39c6-44ff-8623-e889bb6d6c02 nodeName:}" failed. No retries permitted until 2025-09-06 00:04:46.571249014 +0000 UTC m=+42.708724782 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/d3def65b-39c6-44ff-8623-e889bb6d6c02-calico-apiserver-certs") pod "calico-apiserver-797f9c6c85-zgcsm" (UID: "d3def65b-39c6-44ff-8623-e889bb6d6c02") : failed to sync secret cache: timed out waiting for the condition Sep 6 00:04:46.444404 kubelet[2926]: I0906 00:04:46.444345 2926 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" Sep 6 00:04:46.450256 env[1848]: time="2025-09-06T00:04:46.450126583Z" level=info msg="StopPodSandbox for \"ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b\"" Sep 6 00:04:46.453234 kubelet[2926]: I0906 00:04:46.452969 2926 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" Sep 6 00:04:46.456531 env[1848]: time="2025-09-06T00:04:46.454427568Z" level=info msg="StopPodSandbox for \"84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb\"" Sep 6 00:04:46.459668 kubelet[2926]: I0906 00:04:46.459613 2926 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" Sep 6 00:04:46.462330 env[1848]: time="2025-09-06T00:04:46.460808590Z" level=info msg="StopPodSandbox for \"f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827\"" Sep 6 00:04:46.466668 kubelet[2926]: I0906 00:04:46.465968 2926 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" Sep 6 00:04:46.470361 kubelet[2926]: I0906 00:04:46.470310 2926 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" Sep 6 00:04:46.471111 env[1848]: time="2025-09-06T00:04:46.471045584Z" level=info msg="StopPodSandbox for \"e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d\"" Sep 6 00:04:46.472226 env[1848]: time="2025-09-06T00:04:46.472155560Z" level=info msg="StopPodSandbox for \"7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b\"" Sep 6 00:04:46.478792 kubelet[2926]: I0906 00:04:46.477844 2926 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" Sep 6 00:04:46.479062 env[1848]: time="2025-09-06T00:04:46.478999981Z" level=info msg="StopPodSandbox for \"52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d\"" Sep 6 00:04:46.588162 env[1848]: time="2025-09-06T00:04:46.588061351Z" level=error msg="StopPodSandbox for \"f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827\" failed" error="failed to destroy network for sandbox \"f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:46.589069 kubelet[2926]: E0906 00:04:46.588644 2926 log.go:32] "StopPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" Sep 6 00:04:46.589069 kubelet[2926]: E0906 00:04:46.588781 2926 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827"} Sep 6 00:04:46.589069 kubelet[2926]: E0906 00:04:46.588901 2926 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"695d8e71-3eef-4ad5-94de-576fc3ef9397\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:04:46.589069 kubelet[2926]: E0906 00:04:46.588991 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"695d8e71-3eef-4ad5-94de-576fc3ef9397\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mt5pq" podUID="695d8e71-3eef-4ad5-94de-576fc3ef9397" Sep 6 00:04:46.590092 env[1848]: time="2025-09-06T00:04:46.589953734Z" level=error msg="StopPodSandbox for 
\"84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb\" failed" error="failed to destroy network for sandbox \"84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:46.590930 kubelet[2926]: E0906 00:04:46.590601 2926 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" Sep 6 00:04:46.590930 kubelet[2926]: E0906 00:04:46.590704 2926 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb"} Sep 6 00:04:46.590930 kubelet[2926]: E0906 00:04:46.590784 2926 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"92c99ea2-eb5a-48a9-9626-85e265ce8b17\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:04:46.590930 kubelet[2926]: E0906 00:04:46.590858 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"92c99ea2-eb5a-48a9-9626-85e265ce8b17\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-764999b789-c2t8l" podUID="92c99ea2-eb5a-48a9-9626-85e265ce8b17" Sep 6 00:04:46.665594 env[1848]: time="2025-09-06T00:04:46.665488997Z" level=error msg="StopPodSandbox for \"ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b\" failed" error="failed to destroy network for sandbox \"ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:46.666214 kubelet[2926]: E0906 00:04:46.665959 2926 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" Sep 6 00:04:46.666214 kubelet[2926]: E0906 00:04:46.666027 2926 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b"} Sep 6 00:04:46.666214 kubelet[2926]: E0906 00:04:46.666082 2926 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8f59dc9d-44f6-4633-8546-11f2219b7da2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:04:46.666214 kubelet[2926]: E0906 00:04:46.666124 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8f59dc9d-44f6-4633-8546-11f2219b7da2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zpqph" podUID="8f59dc9d-44f6-4633-8546-11f2219b7da2" Sep 6 00:04:46.683088 env[1848]: time="2025-09-06T00:04:46.682990762Z" level=error msg="StopPodSandbox for \"e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d\" failed" error="failed to destroy network for sandbox \"e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:46.683726 kubelet[2926]: E0906 00:04:46.683480 2926 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" Sep 6 00:04:46.683726 kubelet[2926]: E0906 00:04:46.683548 2926 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d"} Sep 6 00:04:46.683726 kubelet[2926]: E0906 00:04:46.683604 2926 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"482c659b-c6e7-44f9-a152-a4d1a37c30db\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:04:46.683726 kubelet[2926]: E0906 00:04:46.683643 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"482c659b-c6e7-44f9-a152-a4d1a37c30db\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b6bdf98bc-kchk5" podUID="482c659b-c6e7-44f9-a152-a4d1a37c30db" Sep 6 00:04:46.687030 env[1848]: time="2025-09-06T00:04:46.686924143Z" level=error msg="StopPodSandbox for \"52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d\" failed" error="failed to destroy network for sandbox \"52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:46.687644 kubelet[2926]: E0906 00:04:46.687401 2926 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" Sep 6 00:04:46.687644 kubelet[2926]: E0906 00:04:46.687468 2926 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d"} Sep 6 00:04:46.687644 kubelet[2926]: E0906 00:04:46.687530 2926 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4dca1103-4d65-4dd5-9f1d-b159dc97b5c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:04:46.687644 kubelet[2926]: E0906 00:04:46.687570 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4dca1103-4d65-4dd5-9f1d-b159dc97b5c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dz8br" podUID="4dca1103-4d65-4dd5-9f1d-b159dc97b5c8" Sep 6 00:04:46.688429 env[1848]: time="2025-09-06T00:04:46.688330469Z" level=error msg="StopPodSandbox for \"7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b\" failed" error="failed to destroy network for sandbox 
\"7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:46.689021 kubelet[2926]: E0906 00:04:46.688761 2926 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" Sep 6 00:04:46.689021 kubelet[2926]: E0906 00:04:46.688847 2926 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b"} Sep 6 00:04:46.689021 kubelet[2926]: E0906 00:04:46.688901 2926 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"46f8567e-23c4-451e-82e1-50ad6fac0a71\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:04:46.689021 kubelet[2926]: E0906 00:04:46.688943 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"46f8567e-23c4-451e-82e1-50ad6fac0a71\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-8wwqq" podUID="46f8567e-23c4-451e-82e1-50ad6fac0a71" Sep 6 00:04:46.733588 env[1848]: time="2025-09-06T00:04:46.730641975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-797f9c6c85-zgcsm,Uid:d3def65b-39c6-44ff-8623-e889bb6d6c02,Namespace:calico-apiserver,Attempt:0,}" Sep 6 00:04:46.777656 env[1848]: time="2025-09-06T00:04:46.777584601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-797f9c6c85-hw6d9,Uid:1c9fcc46-cf36-4206-bdd4-bb1b1b8568f2,Namespace:calico-apiserver,Attempt:0,}" Sep 6 00:04:46.973574 env[1848]: time="2025-09-06T00:04:46.973492067Z" level=error msg="Failed to destroy network for sandbox \"28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:46.978521 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120-shm.mount: Deactivated successfully. 
Sep 6 00:04:46.980947 env[1848]: time="2025-09-06T00:04:46.980875964Z" level=error msg="encountered an error cleaning up failed sandbox \"28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:46.981254 env[1848]: time="2025-09-06T00:04:46.981174595Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-797f9c6c85-zgcsm,Uid:d3def65b-39c6-44ff-8623-e889bb6d6c02,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:46.985041 kubelet[2926]: E0906 00:04:46.981751 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:46.985041 kubelet[2926]: E0906 00:04:46.981890 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-797f9c6c85-zgcsm" Sep 6 00:04:46.985041 kubelet[2926]: E0906 00:04:46.981924 2926 
kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-797f9c6c85-zgcsm" Sep 6 00:04:46.985794 kubelet[2926]: E0906 00:04:46.982417 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-797f9c6c85-zgcsm_calico-apiserver(d3def65b-39c6-44ff-8623-e889bb6d6c02)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-797f9c6c85-zgcsm_calico-apiserver(d3def65b-39c6-44ff-8623-e889bb6d6c02)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-797f9c6c85-zgcsm" podUID="d3def65b-39c6-44ff-8623-e889bb6d6c02" Sep 6 00:04:47.021166 env[1848]: time="2025-09-06T00:04:47.021091939Z" level=error msg="Failed to destroy network for sandbox \"809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:47.026303 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975-shm.mount: Deactivated successfully. 
Sep 6 00:04:47.029703 env[1848]: time="2025-09-06T00:04:47.029621889Z" level=error msg="encountered an error cleaning up failed sandbox \"809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:47.029984 env[1848]: time="2025-09-06T00:04:47.029908195Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-797f9c6c85-hw6d9,Uid:1c9fcc46-cf36-4206-bdd4-bb1b1b8568f2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:47.031984 kubelet[2926]: E0906 00:04:47.030478 2926 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:47.031984 kubelet[2926]: E0906 00:04:47.030561 2926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-797f9c6c85-hw6d9" Sep 6 00:04:47.031984 kubelet[2926]: E0906 00:04:47.030595 2926 
kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-797f9c6c85-hw6d9" Sep 6 00:04:47.032404 kubelet[2926]: E0906 00:04:47.030660 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-797f9c6c85-hw6d9_calico-apiserver(1c9fcc46-cf36-4206-bdd4-bb1b1b8568f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-797f9c6c85-hw6d9_calico-apiserver(1c9fcc46-cf36-4206-bdd4-bb1b1b8568f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-797f9c6c85-hw6d9" podUID="1c9fcc46-cf36-4206-bdd4-bb1b1b8568f2" Sep 6 00:04:47.485590 kubelet[2926]: I0906 00:04:47.485536 2926 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" Sep 6 00:04:47.489587 env[1848]: time="2025-09-06T00:04:47.486837210Z" level=info msg="StopPodSandbox for \"28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120\"" Sep 6 00:04:47.492418 kubelet[2926]: I0906 00:04:47.492353 2926 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" Sep 6 00:04:47.494150 env[1848]: time="2025-09-06T00:04:47.494078822Z" level=info msg="StopPodSandbox for 
\"809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975\"" Sep 6 00:04:47.578008 env[1848]: time="2025-09-06T00:04:47.577924860Z" level=error msg="StopPodSandbox for \"809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975\" failed" error="failed to destroy network for sandbox \"809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:47.578880 kubelet[2926]: E0906 00:04:47.578545 2926 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" Sep 6 00:04:47.578880 kubelet[2926]: E0906 00:04:47.578632 2926 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975"} Sep 6 00:04:47.578880 kubelet[2926]: E0906 00:04:47.578731 2926 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1c9fcc46-cf36-4206-bdd4-bb1b1b8568f2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:04:47.578880 kubelet[2926]: E0906 00:04:47.578796 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"1c9fcc46-cf36-4206-bdd4-bb1b1b8568f2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-797f9c6c85-hw6d9" podUID="1c9fcc46-cf36-4206-bdd4-bb1b1b8568f2" Sep 6 00:04:47.590862 env[1848]: time="2025-09-06T00:04:47.590781941Z" level=error msg="StopPodSandbox for \"28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120\" failed" error="failed to destroy network for sandbox \"28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 00:04:47.591742 kubelet[2926]: E0906 00:04:47.591389 2926 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" Sep 6 00:04:47.591742 kubelet[2926]: E0906 00:04:47.591544 2926 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120"} Sep 6 00:04:47.591742 kubelet[2926]: E0906 00:04:47.591625 2926 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d3def65b-39c6-44ff-8623-e889bb6d6c02\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 00:04:47.591742 kubelet[2926]: E0906 00:04:47.591667 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d3def65b-39c6-44ff-8623-e889bb6d6c02\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-797f9c6c85-zgcsm" podUID="d3def65b-39c6-44ff-8623-e889bb6d6c02" Sep 6 00:04:55.284521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3120218772.mount: Deactivated successfully. 
Sep 6 00:04:55.342003 env[1848]: time="2025-09-06T00:04:55.341939671Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:04:55.344327 env[1848]: time="2025-09-06T00:04:55.344278001Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:04:55.346556 env[1848]: time="2025-09-06T00:04:55.346491494Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:04:55.348829 env[1848]: time="2025-09-06T00:04:55.348778421Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:04:55.349936 env[1848]: time="2025-09-06T00:04:55.349891102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 6 00:04:55.390040 env[1848]: time="2025-09-06T00:04:55.389933626Z" level=info msg="CreateContainer within sandbox \"0629f4156aa8104fffd04d639036b780f9eb814deea9c56f707e10d19ffc8e57\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 6 00:04:55.415330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3401983663.mount: Deactivated successfully. 
Sep 6 00:04:55.417880 env[1848]: time="2025-09-06T00:04:55.417570423Z" level=info msg="CreateContainer within sandbox \"0629f4156aa8104fffd04d639036b780f9eb814deea9c56f707e10d19ffc8e57\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5349da681c3d6e91eb0f99d9ea65366fd853387fa9a924ad76cbb13b820900a4\"" Sep 6 00:04:55.421717 env[1848]: time="2025-09-06T00:04:55.421597728Z" level=info msg="StartContainer for \"5349da681c3d6e91eb0f99d9ea65366fd853387fa9a924ad76cbb13b820900a4\"" Sep 6 00:04:55.624838 env[1848]: time="2025-09-06T00:04:55.624532741Z" level=info msg="StartContainer for \"5349da681c3d6e91eb0f99d9ea65366fd853387fa9a924ad76cbb13b820900a4\" returns successfully" Sep 6 00:04:55.918275 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 6 00:04:55.918488 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 6 00:04:56.144020 env[1848]: time="2025-09-06T00:04:56.143962309Z" level=info msg="StopPodSandbox for \"e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d\"" Sep 6 00:04:56.575296 kubelet[2926]: I0906 00:04:56.567356 2926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fgtb5" podStartSLOduration=2.435748298 podStartE2EDuration="25.567331262s" podCreationTimestamp="2025-09-06 00:04:31 +0000 UTC" firstStartedPulling="2025-09-06 00:04:32.220778695 +0000 UTC m=+28.358254451" lastFinishedPulling="2025-09-06 00:04:55.352361659 +0000 UTC m=+51.489837415" observedRunningTime="2025-09-06 00:04:56.566899108 +0000 UTC m=+52.704374876" watchObservedRunningTime="2025-09-06 00:04:56.567331262 +0000 UTC m=+52.704807042" Sep 6 00:04:56.608416 env[1848]: 2025-09-06 00:04:56.382 [INFO][4128] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" Sep 6 00:04:56.608416 env[1848]: 2025-09-06 00:04:56.382 [INFO][4128] cni-plugin/dataplane_linux.go 559: 
Deleting workload's device in netns. ContainerID="e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" iface="eth0" netns="/var/run/netns/cni-e053ba63-2e42-03ed-5685-8bf8eab6aef3" Sep 6 00:04:56.608416 env[1848]: 2025-09-06 00:04:56.382 [INFO][4128] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" iface="eth0" netns="/var/run/netns/cni-e053ba63-2e42-03ed-5685-8bf8eab6aef3" Sep 6 00:04:56.608416 env[1848]: 2025-09-06 00:04:56.386 [INFO][4128] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" iface="eth0" netns="/var/run/netns/cni-e053ba63-2e42-03ed-5685-8bf8eab6aef3" Sep 6 00:04:56.608416 env[1848]: 2025-09-06 00:04:56.386 [INFO][4128] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" Sep 6 00:04:56.608416 env[1848]: 2025-09-06 00:04:56.386 [INFO][4128] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" Sep 6 00:04:56.608416 env[1848]: 2025-09-06 00:04:56.556 [INFO][4137] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" HandleID="k8s-pod-network.e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" Workload="ip--172--31--24--61-k8s-whisker--b6bdf98bc--kchk5-eth0" Sep 6 00:04:56.608416 env[1848]: 2025-09-06 00:04:56.557 [INFO][4137] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:04:56.608416 env[1848]: 2025-09-06 00:04:56.557 [INFO][4137] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:04:56.608416 env[1848]: 2025-09-06 00:04:56.584 [WARNING][4137] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" HandleID="k8s-pod-network.e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" Workload="ip--172--31--24--61-k8s-whisker--b6bdf98bc--kchk5-eth0" Sep 6 00:04:56.608416 env[1848]: 2025-09-06 00:04:56.585 [INFO][4137] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" HandleID="k8s-pod-network.e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" Workload="ip--172--31--24--61-k8s-whisker--b6bdf98bc--kchk5-eth0" Sep 6 00:04:56.608416 env[1848]: 2025-09-06 00:04:56.591 [INFO][4137] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:04:56.608416 env[1848]: 2025-09-06 00:04:56.601 [INFO][4128] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" Sep 6 00:04:56.615814 systemd[1]: run-netns-cni\x2de053ba63\x2d2e42\x2d03ed\x2d5685\x2d8bf8eab6aef3.mount: Deactivated successfully. Sep 6 00:04:56.625015 env[1848]: time="2025-09-06T00:04:56.624942303Z" level=info msg="TearDown network for sandbox \"e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d\" successfully" Sep 6 00:04:56.626439 env[1848]: time="2025-09-06T00:04:56.626365725Z" level=info msg="StopPodSandbox for \"e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d\" returns successfully" Sep 6 00:04:56.690262 systemd[1]: run-containerd-runc-k8s.io-5349da681c3d6e91eb0f99d9ea65366fd853387fa9a924ad76cbb13b820900a4-runc.up9fDl.mount: Deactivated successfully. 
Sep 6 00:04:56.798043 kubelet[2926]: I0906 00:04:56.795677 2926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rh8tg\" (UniqueName: \"kubernetes.io/projected/482c659b-c6e7-44f9-a152-a4d1a37c30db-kube-api-access-rh8tg\") pod \"482c659b-c6e7-44f9-a152-a4d1a37c30db\" (UID: \"482c659b-c6e7-44f9-a152-a4d1a37c30db\") " Sep 6 00:04:56.798043 kubelet[2926]: I0906 00:04:56.795770 2926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/482c659b-c6e7-44f9-a152-a4d1a37c30db-whisker-backend-key-pair\") pod \"482c659b-c6e7-44f9-a152-a4d1a37c30db\" (UID: \"482c659b-c6e7-44f9-a152-a4d1a37c30db\") " Sep 6 00:04:56.798043 kubelet[2926]: I0906 00:04:56.795817 2926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/482c659b-c6e7-44f9-a152-a4d1a37c30db-whisker-ca-bundle\") pod \"482c659b-c6e7-44f9-a152-a4d1a37c30db\" (UID: \"482c659b-c6e7-44f9-a152-a4d1a37c30db\") " Sep 6 00:04:56.798043 kubelet[2926]: I0906 00:04:56.796707 2926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/482c659b-c6e7-44f9-a152-a4d1a37c30db-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "482c659b-c6e7-44f9-a152-a4d1a37c30db" (UID: "482c659b-c6e7-44f9-a152-a4d1a37c30db"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 00:04:56.823279 kubelet[2926]: I0906 00:04:56.823152 2926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/482c659b-c6e7-44f9-a152-a4d1a37c30db-kube-api-access-rh8tg" (OuterVolumeSpecName: "kube-api-access-rh8tg") pod "482c659b-c6e7-44f9-a152-a4d1a37c30db" (UID: "482c659b-c6e7-44f9-a152-a4d1a37c30db"). InnerVolumeSpecName "kube-api-access-rh8tg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:04:56.825612 kubelet[2926]: I0906 00:04:56.825485 2926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/482c659b-c6e7-44f9-a152-a4d1a37c30db-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "482c659b-c6e7-44f9-a152-a4d1a37c30db" (UID: "482c659b-c6e7-44f9-a152-a4d1a37c30db"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 00:04:56.826572 systemd[1]: var-lib-kubelet-pods-482c659b\x2dc6e7\x2d44f9\x2da152\x2da4d1a37c30db-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drh8tg.mount: Deactivated successfully. Sep 6 00:04:56.826980 systemd[1]: var-lib-kubelet-pods-482c659b\x2dc6e7\x2d44f9\x2da152\x2da4d1a37c30db-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 6 00:04:56.897167 kubelet[2926]: I0906 00:04:56.897109 2926 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/482c659b-c6e7-44f9-a152-a4d1a37c30db-whisker-backend-key-pair\") on node \"ip-172-31-24-61\" DevicePath \"\"" Sep 6 00:04:56.897507 kubelet[2926]: I0906 00:04:56.897464 2926 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/482c659b-c6e7-44f9-a152-a4d1a37c30db-whisker-ca-bundle\") on node \"ip-172-31-24-61\" DevicePath \"\"" Sep 6 00:04:56.897670 kubelet[2926]: I0906 00:04:56.897646 2926 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rh8tg\" (UniqueName: \"kubernetes.io/projected/482c659b-c6e7-44f9-a152-a4d1a37c30db-kube-api-access-rh8tg\") on node \"ip-172-31-24-61\" DevicePath \"\"" Sep 6 00:04:57.808454 kubelet[2926]: I0906 00:04:57.808361 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/23251729-d599-42e3-8acf-04f97e1f2425-whisker-backend-key-pair\") pod \"whisker-b986b57c8-g24l8\" (UID: \"23251729-d599-42e3-8acf-04f97e1f2425\") " pod="calico-system/whisker-b986b57c8-g24l8" Sep 6 00:04:57.809119 kubelet[2926]: I0906 00:04:57.808500 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23251729-d599-42e3-8acf-04f97e1f2425-whisker-ca-bundle\") pod \"whisker-b986b57c8-g24l8\" (UID: \"23251729-d599-42e3-8acf-04f97e1f2425\") " pod="calico-system/whisker-b986b57c8-g24l8" Sep 6 00:04:57.809119 kubelet[2926]: I0906 00:04:57.808613 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zkz2\" (UniqueName: \"kubernetes.io/projected/23251729-d599-42e3-8acf-04f97e1f2425-kube-api-access-2zkz2\") pod \"whisker-b986b57c8-g24l8\" (UID: \"23251729-d599-42e3-8acf-04f97e1f2425\") " pod="calico-system/whisker-b986b57c8-g24l8" Sep 6 00:04:57.990433 env[1848]: time="2025-09-06T00:04:57.989899448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b986b57c8-g24l8,Uid:23251729-d599-42e3-8acf-04f97e1f2425,Namespace:calico-system,Attempt:0,}" Sep 6 00:04:58.000000 audit[4253]: AVC avc: denied { write } for pid=4253 comm="tee" name="fd" dev="proc" ino=21655 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:04:58.000000 audit[4253]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe44757c6 a2=241 a3=1b6 items=1 ppid=4207 pid=4253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:58.026989 kernel: audit: type=1400 audit(1757117098.000:308): avc: denied { write } for pid=4253 comm="tee" name="fd" dev="proc" ino=21655 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:04:58.027135 kernel: audit: type=1300 audit(1757117098.000:308): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe44757c6 a2=241 a3=1b6 items=1 ppid=4207 pid=4253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:58.000000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Sep 6 00:04:58.035103 kernel: audit: type=1307 audit(1757117098.000:308): cwd="/etc/service/enabled/node-status-reporter/log" Sep 6 00:04:58.000000 audit: PATH item=0 name="/dev/fd/63" inode=21652 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:04:58.043730 kernel: audit: type=1302 audit(1757117098.000:308): item=0 name="/dev/fd/63" inode=21652 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:04:58.000000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 00:04:58.051950 kernel: audit: type=1327 audit(1757117098.000:308): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 00:04:58.081243 kernel: audit: type=1400 audit(1757117098.058:309): avc: denied { write } for pid=4257 comm="tee" name="fd" dev="proc" ino=21669 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:04:58.058000 audit[4257]: AVC avc: denied { write } for pid=4257 comm="tee" name="fd" dev="proc" ino=21669 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:04:58.109410 kernel: audit: type=1300 audit(1757117098.058:309): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffcdcef7d5 a2=241 a3=1b6 items=1 ppid=4213 pid=4257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:58.058000 audit[4257]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffcdcef7d5 a2=241 a3=1b6 items=1 ppid=4213 pid=4257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:58.058000 audit: CWD cwd="/etc/service/enabled/confd/log" Sep 6 00:04:58.126878 kernel: audit: type=1307 audit(1757117098.058:309): cwd="/etc/service/enabled/confd/log" Sep 6 00:04:58.127019 kernel: audit: type=1302 audit(1757117098.058:309): item=0 name="/dev/fd/63" inode=22668 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:04:58.058000 audit: PATH item=0 name="/dev/fd/63" inode=22668 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:04:58.134884 kernel: audit: type=1327 audit(1757117098.058:309): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 00:04:58.058000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 00:04:58.110000 audit[4269]: AVC avc: denied { write } for pid=4269 comm="tee" name="fd" dev="proc" ino=21686 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:04:58.110000 audit[4269]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe26f67d5 a2=241 a3=1b6 items=1 ppid=4205 pid=4269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:58.110000 audit: CWD cwd="/etc/service/enabled/bird6/log" Sep 6 00:04:58.110000 audit: PATH item=0 name="/dev/fd/63" inode=21681 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:04:58.110000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 00:04:58.196000 audit[4277]: AVC avc: denied { write } for pid=4277 comm="tee" name="fd" dev="proc" ino=21697 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:04:58.196000 audit[4277]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc4d6b7d6 a2=241 a3=1b6 items=1 ppid=4219 pid=4277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:58.196000 audit: CWD cwd="/etc/service/enabled/bird/log" Sep 6 00:04:58.196000 audit: PATH item=0 name="/dev/fd/63" inode=21688 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:04:58.196000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 00:04:58.214000 
audit[4279]: AVC avc: denied { write } for pid=4279 comm="tee" name="fd" dev="proc" ino=21703 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:04:58.214000 audit[4279]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe8a7c7d5 a2=241 a3=1b6 items=1 ppid=4209 pid=4279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:58.214000 audit: CWD cwd="/etc/service/enabled/felix/log" Sep 6 00:04:58.214000 audit: PATH item=0 name="/dev/fd/63" inode=21689 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:04:58.214000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 00:04:58.241868 env[1848]: time="2025-09-06T00:04:58.241788463Z" level=info msg="StopPodSandbox for \"7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b\"" Sep 6 00:04:58.257000 audit[4281]: AVC avc: denied { write } for pid=4281 comm="tee" name="fd" dev="proc" ino=21708 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:04:58.257000 audit[4281]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe411a7d7 a2=241 a3=1b6 items=1 ppid=4217 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:58.257000 audit: CWD cwd="/etc/service/enabled/cni/log" Sep 6 00:04:58.257000 audit: PATH item=0 name="/dev/fd/63" inode=21692 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:04:58.257000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 00:04:58.259991 kubelet[2926]: I0906 00:04:58.251046 2926 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="482c659b-c6e7-44f9-a152-a4d1a37c30db" path="/var/lib/kubelet/pods/482c659b-c6e7-44f9-a152-a4d1a37c30db/volumes" Sep 6 00:04:58.288000 audit[4296]: AVC avc: denied { write } for pid=4296 comm="tee" name="fd" dev="proc" ino=22728 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 00:04:58.288000 audit[4296]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc8ff57c5 a2=241 a3=1b6 items=1 ppid=4210 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:58.288000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Sep 6 00:04:58.288000 audit: PATH item=0 name="/dev/fd/63" inode=21704 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:04:58.288000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 00:04:58.742500 (udev-worker)[4104]: Network interface NamePolicy= disabled on kernel command line. 
Sep 6 00:04:58.767292 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 00:04:58.767468 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali279e987bca3: link becomes ready Sep 6 00:04:58.774717 systemd-networkd[1511]: cali279e987bca3: Link UP Sep 6 00:04:58.784597 systemd[1]: run-containerd-runc-k8s.io-5349da681c3d6e91eb0f99d9ea65366fd853387fa9a924ad76cbb13b820900a4-runc.T25hoh.mount: Deactivated successfully. Sep 6 00:04:58.790780 systemd-networkd[1511]: cali279e987bca3: Gained carrier Sep 6 00:04:58.862813 env[1848]: 2025-09-06 00:04:58.191 [INFO][4258] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 6 00:04:58.862813 env[1848]: 2025-09-06 00:04:58.237 [INFO][4258] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--61-k8s-whisker--b986b57c8--g24l8-eth0 whisker-b986b57c8- calico-system 23251729-d599-42e3-8acf-04f97e1f2425 912 0 2025-09-06 00:04:57 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:b986b57c8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-24-61 whisker-b986b57c8-g24l8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali279e987bca3 [] [] }} ContainerID="c63311d4d5b771442da034ffe7043a2e8151ecb16ce4e44c545dd42066480e2b" Namespace="calico-system" Pod="whisker-b986b57c8-g24l8" WorkloadEndpoint="ip--172--31--24--61-k8s-whisker--b986b57c8--g24l8-" Sep 6 00:04:58.862813 env[1848]: 2025-09-06 00:04:58.237 [INFO][4258] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c63311d4d5b771442da034ffe7043a2e8151ecb16ce4e44c545dd42066480e2b" Namespace="calico-system" Pod="whisker-b986b57c8-g24l8" WorkloadEndpoint="ip--172--31--24--61-k8s-whisker--b986b57c8--g24l8-eth0" Sep 6 00:04:58.862813 env[1848]: 2025-09-06 00:04:58.527 [INFO][4314] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="c63311d4d5b771442da034ffe7043a2e8151ecb16ce4e44c545dd42066480e2b" HandleID="k8s-pod-network.c63311d4d5b771442da034ffe7043a2e8151ecb16ce4e44c545dd42066480e2b" Workload="ip--172--31--24--61-k8s-whisker--b986b57c8--g24l8-eth0" Sep 6 00:04:58.862813 env[1848]: 2025-09-06 00:04:58.527 [INFO][4314] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c63311d4d5b771442da034ffe7043a2e8151ecb16ce4e44c545dd42066480e2b" HandleID="k8s-pod-network.c63311d4d5b771442da034ffe7043a2e8151ecb16ce4e44c545dd42066480e2b" Workload="ip--172--31--24--61-k8s-whisker--b986b57c8--g24l8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400026aa50), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-61", "pod":"whisker-b986b57c8-g24l8", "timestamp":"2025-09-06 00:04:58.52702318 +0000 UTC"}, Hostname:"ip-172-31-24-61", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:04:58.862813 env[1848]: 2025-09-06 00:04:58.527 [INFO][4314] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:04:58.862813 env[1848]: 2025-09-06 00:04:58.528 [INFO][4314] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 00:04:58.862813 env[1848]: 2025-09-06 00:04:58.528 [INFO][4314] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-61' Sep 6 00:04:58.862813 env[1848]: 2025-09-06 00:04:58.572 [INFO][4314] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c63311d4d5b771442da034ffe7043a2e8151ecb16ce4e44c545dd42066480e2b" host="ip-172-31-24-61" Sep 6 00:04:58.862813 env[1848]: 2025-09-06 00:04:58.606 [INFO][4314] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-61" Sep 6 00:04:58.862813 env[1848]: 2025-09-06 00:04:58.623 [INFO][4314] ipam/ipam.go 511: Trying affinity for 192.168.40.128/26 host="ip-172-31-24-61" Sep 6 00:04:58.862813 env[1848]: 2025-09-06 00:04:58.628 [INFO][4314] ipam/ipam.go 158: Attempting to load block cidr=192.168.40.128/26 host="ip-172-31-24-61" Sep 6 00:04:58.862813 env[1848]: 2025-09-06 00:04:58.633 [INFO][4314] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.40.128/26 host="ip-172-31-24-61" Sep 6 00:04:58.862813 env[1848]: 2025-09-06 00:04:58.633 [INFO][4314] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.40.128/26 handle="k8s-pod-network.c63311d4d5b771442da034ffe7043a2e8151ecb16ce4e44c545dd42066480e2b" host="ip-172-31-24-61" Sep 6 00:04:58.862813 env[1848]: 2025-09-06 00:04:58.653 [INFO][4314] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c63311d4d5b771442da034ffe7043a2e8151ecb16ce4e44c545dd42066480e2b Sep 6 00:04:58.862813 env[1848]: 2025-09-06 00:04:58.668 [INFO][4314] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.40.128/26 handle="k8s-pod-network.c63311d4d5b771442da034ffe7043a2e8151ecb16ce4e44c545dd42066480e2b" host="ip-172-31-24-61" Sep 6 00:04:58.862813 env[1848]: 2025-09-06 00:04:58.690 [INFO][4314] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.40.129/26] block=192.168.40.128/26 
handle="k8s-pod-network.c63311d4d5b771442da034ffe7043a2e8151ecb16ce4e44c545dd42066480e2b" host="ip-172-31-24-61" Sep 6 00:04:58.862813 env[1848]: 2025-09-06 00:04:58.690 [INFO][4314] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.40.129/26] handle="k8s-pod-network.c63311d4d5b771442da034ffe7043a2e8151ecb16ce4e44c545dd42066480e2b" host="ip-172-31-24-61" Sep 6 00:04:58.862813 env[1848]: 2025-09-06 00:04:58.690 [INFO][4314] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:04:58.862813 env[1848]: 2025-09-06 00:04:58.691 [INFO][4314] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.40.129/26] IPv6=[] ContainerID="c63311d4d5b771442da034ffe7043a2e8151ecb16ce4e44c545dd42066480e2b" HandleID="k8s-pod-network.c63311d4d5b771442da034ffe7043a2e8151ecb16ce4e44c545dd42066480e2b" Workload="ip--172--31--24--61-k8s-whisker--b986b57c8--g24l8-eth0" Sep 6 00:04:58.864396 env[1848]: 2025-09-06 00:04:58.696 [INFO][4258] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c63311d4d5b771442da034ffe7043a2e8151ecb16ce4e44c545dd42066480e2b" Namespace="calico-system" Pod="whisker-b986b57c8-g24l8" WorkloadEndpoint="ip--172--31--24--61-k8s-whisker--b986b57c8--g24l8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--61-k8s-whisker--b986b57c8--g24l8-eth0", GenerateName:"whisker-b986b57c8-", Namespace:"calico-system", SelfLink:"", UID:"23251729-d599-42e3-8acf-04f97e1f2425", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 4, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"b986b57c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-61", ContainerID:"", Pod:"whisker-b986b57c8-g24l8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.40.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali279e987bca3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:04:58.864396 env[1848]: 2025-09-06 00:04:58.696 [INFO][4258] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.40.129/32] ContainerID="c63311d4d5b771442da034ffe7043a2e8151ecb16ce4e44c545dd42066480e2b" Namespace="calico-system" Pod="whisker-b986b57c8-g24l8" WorkloadEndpoint="ip--172--31--24--61-k8s-whisker--b986b57c8--g24l8-eth0" Sep 6 00:04:58.864396 env[1848]: 2025-09-06 00:04:58.696 [INFO][4258] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali279e987bca3 ContainerID="c63311d4d5b771442da034ffe7043a2e8151ecb16ce4e44c545dd42066480e2b" Namespace="calico-system" Pod="whisker-b986b57c8-g24l8" WorkloadEndpoint="ip--172--31--24--61-k8s-whisker--b986b57c8--g24l8-eth0" Sep 6 00:04:58.864396 env[1848]: 2025-09-06 00:04:58.771 [INFO][4258] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c63311d4d5b771442da034ffe7043a2e8151ecb16ce4e44c545dd42066480e2b" Namespace="calico-system" Pod="whisker-b986b57c8-g24l8" WorkloadEndpoint="ip--172--31--24--61-k8s-whisker--b986b57c8--g24l8-eth0" Sep 6 00:04:58.864396 env[1848]: 2025-09-06 00:04:58.818 [INFO][4258] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c63311d4d5b771442da034ffe7043a2e8151ecb16ce4e44c545dd42066480e2b" Namespace="calico-system" Pod="whisker-b986b57c8-g24l8" 
WorkloadEndpoint="ip--172--31--24--61-k8s-whisker--b986b57c8--g24l8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--61-k8s-whisker--b986b57c8--g24l8-eth0", GenerateName:"whisker-b986b57c8-", Namespace:"calico-system", SelfLink:"", UID:"23251729-d599-42e3-8acf-04f97e1f2425", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 4, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"b986b57c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-61", ContainerID:"c63311d4d5b771442da034ffe7043a2e8151ecb16ce4e44c545dd42066480e2b", Pod:"whisker-b986b57c8-g24l8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.40.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali279e987bca3", MAC:"5e:43:01:9f:4b:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:04:58.864396 env[1848]: 2025-09-06 00:04:58.851 [INFO][4258] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c63311d4d5b771442da034ffe7043a2e8151ecb16ce4e44c545dd42066480e2b" Namespace="calico-system" Pod="whisker-b986b57c8-g24l8" WorkloadEndpoint="ip--172--31--24--61-k8s-whisker--b986b57c8--g24l8-eth0" Sep 6 00:04:58.913076 env[1848]: time="2025-09-06T00:04:58.910787775Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:04:58.913076 env[1848]: time="2025-09-06T00:04:58.910973124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:04:58.913076 env[1848]: time="2025-09-06T00:04:58.911035411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:04:58.913076 env[1848]: time="2025-09-06T00:04:58.911393224Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c63311d4d5b771442da034ffe7043a2e8151ecb16ce4e44c545dd42066480e2b pid=4356 runtime=io.containerd.runc.v2 Sep 6 00:04:58.960903 env[1848]: 2025-09-06 00:04:58.552 [INFO][4309] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" Sep 6 00:04:58.960903 env[1848]: 2025-09-06 00:04:58.552 [INFO][4309] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" iface="eth0" netns="/var/run/netns/cni-329bcd3b-5f83-b97c-9d07-a82d7ddda401" Sep 6 00:04:58.960903 env[1848]: 2025-09-06 00:04:58.553 [INFO][4309] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" iface="eth0" netns="/var/run/netns/cni-329bcd3b-5f83-b97c-9d07-a82d7ddda401" Sep 6 00:04:58.960903 env[1848]: 2025-09-06 00:04:58.553 [INFO][4309] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" iface="eth0" netns="/var/run/netns/cni-329bcd3b-5f83-b97c-9d07-a82d7ddda401" Sep 6 00:04:58.960903 env[1848]: 2025-09-06 00:04:58.554 [INFO][4309] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" Sep 6 00:04:58.960903 env[1848]: 2025-09-06 00:04:58.554 [INFO][4309] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" Sep 6 00:04:58.960903 env[1848]: 2025-09-06 00:04:58.904 [INFO][4326] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" HandleID="k8s-pod-network.7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" Workload="ip--172--31--24--61-k8s-goldmane--7988f88666--8wwqq-eth0" Sep 6 00:04:58.960903 env[1848]: 2025-09-06 00:04:58.906 [INFO][4326] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:04:58.960903 env[1848]: 2025-09-06 00:04:58.906 [INFO][4326] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:04:58.960903 env[1848]: 2025-09-06 00:04:58.934 [WARNING][4326] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" HandleID="k8s-pod-network.7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" Workload="ip--172--31--24--61-k8s-goldmane--7988f88666--8wwqq-eth0" Sep 6 00:04:58.960903 env[1848]: 2025-09-06 00:04:58.935 [INFO][4326] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" HandleID="k8s-pod-network.7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" Workload="ip--172--31--24--61-k8s-goldmane--7988f88666--8wwqq-eth0" Sep 6 00:04:58.960903 env[1848]: 2025-09-06 00:04:58.939 [INFO][4326] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:04:58.960903 env[1848]: 2025-09-06 00:04:58.946 [INFO][4309] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" Sep 6 00:04:58.966607 systemd[1]: run-netns-cni\x2d329bcd3b\x2d5f83\x2db97c\x2d9d07\x2da82d7ddda401.mount: Deactivated successfully. 
Sep 6 00:04:58.974502 env[1848]: time="2025-09-06T00:04:58.973271089Z" level=info msg="TearDown network for sandbox \"7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b\" successfully" Sep 6 00:04:58.974502 env[1848]: time="2025-09-06T00:04:58.973340480Z" level=info msg="StopPodSandbox for \"7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b\" returns successfully" Sep 6 00:04:58.975566 env[1848]: time="2025-09-06T00:04:58.975515467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-8wwqq,Uid:46f8567e-23c4-451e-82e1-50ad6fac0a71,Namespace:calico-system,Attempt:1,}" Sep 6 00:04:59.246490 env[1848]: time="2025-09-06T00:04:59.246388744Z" level=info msg="StopPodSandbox for \"f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827\"" Sep 6 00:04:59.247521 env[1848]: time="2025-09-06T00:04:59.247456786Z" level=info msg="StopPodSandbox for \"52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d\"" Sep 6 00:04:59.247944 env[1848]: time="2025-09-06T00:04:59.247595304Z" level=info msg="StopPodSandbox for \"28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120\"" Sep 6 00:04:59.399000 audit[4475]: AVC avc: denied { bpf } for pid=4475 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.399000 audit[4475]: AVC avc: denied { bpf } for pid=4475 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.399000 audit[4475]: AVC avc: denied { perfmon } for pid=4475 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.399000 audit[4475]: AVC avc: denied { perfmon } for pid=4475 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Sep 6 00:04:59.399000 audit[4475]: AVC avc: denied { perfmon } for pid=4475 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.399000 audit[4475]: AVC avc: denied { perfmon } for pid=4475 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.399000 audit[4475]: AVC avc: denied { perfmon } for pid=4475 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.399000 audit[4475]: AVC avc: denied { bpf } for pid=4475 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.399000 audit[4475]: AVC avc: denied { bpf } for pid=4475 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.399000 audit: BPF prog-id=10 op=LOAD Sep 6 00:04:59.399000 audit[4475]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffac6a058 a2=98 a3=fffffac6a048 items=0 ppid=4211 pid=4475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:59.399000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 6 00:04:59.400000 audit: BPF prog-id=10 op=UNLOAD Sep 6 00:04:59.406000 audit[4475]: AVC avc: denied { bpf } for pid=4475 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.406000 audit[4475]: AVC avc: denied { bpf } for pid=4475 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.406000 audit[4475]: AVC avc: denied { perfmon } for pid=4475 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.406000 audit[4475]: AVC avc: denied { perfmon } for pid=4475 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.406000 audit[4475]: AVC avc: denied { perfmon } for pid=4475 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.406000 audit[4475]: AVC avc: denied { perfmon } for pid=4475 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.406000 audit[4475]: AVC avc: denied { perfmon } for pid=4475 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.406000 audit[4475]: AVC avc: denied { bpf } for pid=4475 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.406000 audit[4475]: AVC avc: denied { bpf } for pid=4475 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.406000 audit: BPF prog-id=11 op=LOAD Sep 6 00:04:59.406000 audit[4475]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffac69f08 a2=74 a3=95 items=0 ppid=4211 pid=4475 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:59.406000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 6 00:04:59.408000 audit: BPF prog-id=11 op=UNLOAD Sep 6 00:04:59.408000 audit[4475]: AVC avc: denied { bpf } for pid=4475 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.408000 audit[4475]: AVC avc: denied { bpf } for pid=4475 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.408000 audit[4475]: AVC avc: denied { perfmon } for pid=4475 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.408000 audit[4475]: AVC avc: denied { perfmon } for pid=4475 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.408000 audit[4475]: AVC avc: denied { perfmon } for pid=4475 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.408000 audit[4475]: AVC avc: denied { perfmon } for pid=4475 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.408000 audit[4475]: AVC avc: denied { perfmon } for pid=4475 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 6 00:04:59.408000 audit[4475]: AVC avc: denied { bpf } for pid=4475 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.408000 audit[4475]: AVC avc: denied { bpf } for pid=4475 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.408000 audit: BPF prog-id=12 op=LOAD Sep 6 00:04:59.408000 audit[4475]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffac69f38 a2=40 a3=fffffac69f68 items=0 ppid=4211 pid=4475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:59.408000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 6 00:04:59.408000 audit: BPF prog-id=12 op=UNLOAD Sep 6 00:04:59.408000 audit[4475]: AVC avc: denied { perfmon } for pid=4475 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.408000 audit[4475]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=0 a1=fffffac6a050 a2=50 a3=0 items=0 ppid=4211 pid=4475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:59.408000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 6 00:04:59.415000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.415000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.415000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.415000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.415000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.415000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.415000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.415000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.415000 
audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.415000 audit: BPF prog-id=13 op=LOAD Sep 6 00:04:59.415000 audit[4482]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd2d797a8 a2=98 a3=ffffd2d79798 items=0 ppid=4211 pid=4482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:59.415000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:04:59.415000 audit: BPF prog-id=13 op=UNLOAD Sep 6 00:04:59.415000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.415000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.415000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.415000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.415000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.415000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Sep 6 00:04:59.415000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.415000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.415000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.415000 audit: BPF prog-id=14 op=LOAD Sep 6 00:04:59.415000 audit[4482]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd2d79438 a2=74 a3=95 items=0 ppid=4211 pid=4482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:59.415000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:04:59.416000 audit: BPF prog-id=14 op=UNLOAD Sep 6 00:04:59.416000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.416000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.416000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.416000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.416000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.416000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.416000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.416000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.416000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.416000 audit: BPF prog-id=15 op=LOAD Sep 6 00:04:59.416000 audit[4482]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd2d79498 a2=94 a3=2 items=0 ppid=4211 pid=4482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:59.416000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:04:59.416000 audit: BPF prog-id=15 op=UNLOAD Sep 6 00:04:59.613312 env[1848]: time="2025-09-06T00:04:59.609853891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b986b57c8-g24l8,Uid:23251729-d599-42e3-8acf-04f97e1f2425,Namespace:calico-system,Attempt:0,} returns sandbox id \"c63311d4d5b771442da034ffe7043a2e8151ecb16ce4e44c545dd42066480e2b\"" Sep 6 
00:04:59.632150 env[1848]: time="2025-09-06T00:04:59.630466659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 6 00:04:59.763000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.763000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.763000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.763000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.763000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.763000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.763000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.763000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.763000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.763000 audit: BPF prog-id=16 op=LOAD Sep 6 00:04:59.763000 audit[4482]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd2d79458 a2=40 a3=ffffd2d79488 items=0 ppid=4211 pid=4482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:59.763000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:04:59.764000 audit: BPF prog-id=16 op=UNLOAD Sep 6 00:04:59.764000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.764000 audit[4482]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffd2d79570 a2=50 a3=0 items=0 ppid=4211 pid=4482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:59.764000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:04:59.787000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.787000 audit[4482]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd2d794c8 a2=28 a3=ffffd2d795f8 items=0 ppid=4211 pid=4482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:59.787000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:04:59.787000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.787000 audit[4482]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd2d794f8 a2=28 a3=ffffd2d79628 items=0 ppid=4211 pid=4482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:59.787000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:04:59.788000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.788000 audit[4482]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd2d793a8 a2=28 a3=ffffd2d794d8 items=0 ppid=4211 pid=4482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:59.788000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:04:59.788000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.788000 audit[4482]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd2d79518 a2=28 a3=ffffd2d79648 items=0 ppid=4211 pid=4482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:59.788000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:04:59.788000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.788000 audit[4482]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd2d794f8 a2=28 a3=ffffd2d79628 items=0 ppid=4211 pid=4482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:59.788000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:04:59.788000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.788000 audit[4482]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd2d794e8 a2=28 a3=ffffd2d79618 items=0 ppid=4211 pid=4482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:59.788000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:04:59.788000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.788000 audit[4482]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd2d79518 a2=28 a3=ffffd2d79648 items=0 ppid=4211 pid=4482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:59.788000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:04:59.788000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 6 00:04:59.805507 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 00:04:59.805594 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali81342935570: link becomes ready Sep 6 00:04:59.788000 audit[4482]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd2d794f8 a2=28 a3=ffffd2d79628 items=0 ppid=4211 pid=4482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:59.788000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:04:59.788000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.788000 audit[4482]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd2d79518 a2=28 a3=ffffd2d79648 items=0 ppid=4211 pid=4482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:59.788000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:04:59.788000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.788000 audit[4482]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd2d794e8 a2=28 a3=ffffd2d79618 items=0 ppid=4211 pid=4482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:59.788000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:04:59.788000 audit[4482]: AVC avc: 
denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.788000 audit[4482]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd2d79568 a2=28 a3=ffffd2d796a8 items=0 ppid=4211 pid=4482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:59.788000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:04:59.788000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.788000 audit[4482]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffd2d792a0 a2=50 a3=0 items=0 ppid=4211 pid=4482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:59.788000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:04:59.788000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.788000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.788000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.788000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.788000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.788000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.788000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.788000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.788000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.788000 audit: BPF prog-id=17 op=LOAD Sep 6 00:04:59.788000 audit[4482]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd2d792a8 a2=94 a3=5 items=0 ppid=4211 pid=4482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:59.788000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:04:59.799000 audit: BPF prog-id=17 op=UNLOAD Sep 6 00:04:59.799000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.799000 audit[4482]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=5 a0=0 a1=ffffd2d793b0 a2=50 a3=0 items=0 ppid=4211 pid=4482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:59.799000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:04:59.799000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.799000 audit[4482]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffd2d794f8 a2=4 a3=3 items=0 ppid=4211 pid=4482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:59.799000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:04:59.799000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.799000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.799000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.799000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.799000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.799000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.799000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.799000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.799000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.799000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.799000 audit[4482]: AVC avc: denied { confidentiality } for pid=4482 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 6 00:04:59.799000 audit[4482]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffd2d794d8 a2=94 a3=6 items=0 ppid=4211 pid=4482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:59.799000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:04:59.801000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.801000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.801000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.801000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.801000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.801000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.801000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.801000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.801000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.801000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 
00:04:59.801000 audit[4482]: AVC avc: denied { confidentiality } for pid=4482 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 6 00:04:59.801000 audit[4482]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffd2d78ca8 a2=94 a3=83 items=0 ppid=4211 pid=4482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:59.801000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:04:59.805000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.805000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.805000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.805000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.805000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.805000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.805000 audit[4482]: AVC avc: 
denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.805000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.805000 audit[4482]: AVC avc: denied { perfmon } for pid=4482 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.805000 audit[4482]: AVC avc: denied { bpf } for pid=4482 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.805000 audit[4482]: AVC avc: denied { confidentiality } for pid=4482 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 6 00:04:59.805000 audit[4482]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffd2d78ca8 a2=94 a3=83 items=0 ppid=4211 pid=4482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:59.805000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 00:04:59.803475 systemd-networkd[1511]: cali81342935570: Link UP Sep 6 00:04:59.803988 systemd-networkd[1511]: cali81342935570: Gained carrier Sep 6 00:04:59.863000 audit[4509]: AVC avc: denied { bpf } for pid=4509 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.863000 audit[4509]: AVC avc: denied { bpf } for pid=4509 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.863000 audit[4509]: AVC avc: denied { perfmon } for pid=4509 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.863000 audit[4509]: AVC avc: denied { perfmon } for pid=4509 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.863000 audit[4509]: AVC avc: denied { perfmon } for pid=4509 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.863000 audit[4509]: AVC avc: denied { perfmon } for pid=4509 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.863000 audit[4509]: AVC avc: denied { perfmon } for pid=4509 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.863000 audit[4509]: AVC avc: denied { bpf } for pid=4509 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.863000 audit[4509]: AVC avc: denied { bpf } for pid=4509 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.863000 audit: BPF prog-id=18 op=LOAD Sep 6 00:04:59.863000 audit[4509]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd27b3c28 a2=98 a3=ffffd27b3c18 items=0 ppid=4211 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:59.863000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 6 00:04:59.864000 audit: BPF prog-id=18 op=UNLOAD Sep 6 00:04:59.864000 audit[4509]: AVC avc: denied { bpf } for pid=4509 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.864000 audit[4509]: AVC avc: denied { bpf } for pid=4509 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.864000 audit[4509]: AVC avc: denied { perfmon } for pid=4509 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.864000 audit[4509]: AVC avc: denied { perfmon } for pid=4509 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.864000 audit[4509]: AVC avc: denied { perfmon } for pid=4509 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.864000 audit[4509]: AVC avc: denied { perfmon } for pid=4509 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.864000 audit[4509]: AVC avc: denied { perfmon } for pid=4509 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.864000 audit[4509]: AVC avc: denied { bpf } for pid=4509 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 6 00:04:59.864000 audit[4509]: AVC avc: denied { bpf } for pid=4509 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.864000 audit: BPF prog-id=19 op=LOAD Sep 6 00:04:59.864000 audit[4509]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd27b3ad8 a2=74 a3=95 items=0 ppid=4211 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:59.864000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 6 00:04:59.864000 audit: BPF prog-id=19 op=UNLOAD Sep 6 00:04:59.864000 audit[4509]: AVC avc: denied { bpf } for pid=4509 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.864000 audit[4509]: AVC avc: denied { bpf } for pid=4509 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.864000 audit[4509]: AVC avc: denied { perfmon } for pid=4509 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.864000 audit[4509]: AVC avc: denied { perfmon } for pid=4509 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.864000 audit[4509]: AVC avc: denied { perfmon } for pid=4509 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.864000 audit[4509]: AVC avc: denied { perfmon } for pid=4509 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.864000 audit[4509]: AVC avc: denied { perfmon } for pid=4509 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.864000 audit[4509]: AVC avc: denied { bpf } for pid=4509 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.864000 audit[4509]: AVC avc: denied { bpf } for pid=4509 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:04:59.864000 audit: BPF prog-id=20 op=LOAD Sep 6 00:04:59.864000 audit[4509]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd27b3b08 a2=40 a3=ffffd27b3b38 items=0 ppid=4211 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:04:59.864000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 6 00:04:59.865000 audit: BPF prog-id=20 op=UNLOAD Sep 6 00:04:59.895517 env[1848]: 2025-09-06 00:04:59.257 [INFO][4379] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--61-k8s-goldmane--7988f88666--8wwqq-eth0 goldmane-7988f88666- calico-system 46f8567e-23c4-451e-82e1-50ad6fac0a71 919 0 2025-09-06 
00:04:32 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-24-61 goldmane-7988f88666-8wwqq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali81342935570 [] [] }} ContainerID="a0ab623f15e98608017ad9220884152e470e514588f60c0ff6dc05dd103e618e" Namespace="calico-system" Pod="goldmane-7988f88666-8wwqq" WorkloadEndpoint="ip--172--31--24--61-k8s-goldmane--7988f88666--8wwqq-" Sep 6 00:04:59.895517 env[1848]: 2025-09-06 00:04:59.257 [INFO][4379] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a0ab623f15e98608017ad9220884152e470e514588f60c0ff6dc05dd103e618e" Namespace="calico-system" Pod="goldmane-7988f88666-8wwqq" WorkloadEndpoint="ip--172--31--24--61-k8s-goldmane--7988f88666--8wwqq-eth0" Sep 6 00:04:59.895517 env[1848]: 2025-09-06 00:04:59.636 [INFO][4435] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a0ab623f15e98608017ad9220884152e470e514588f60c0ff6dc05dd103e618e" HandleID="k8s-pod-network.a0ab623f15e98608017ad9220884152e470e514588f60c0ff6dc05dd103e618e" Workload="ip--172--31--24--61-k8s-goldmane--7988f88666--8wwqq-eth0" Sep 6 00:04:59.895517 env[1848]: 2025-09-06 00:04:59.638 [INFO][4435] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a0ab623f15e98608017ad9220884152e470e514588f60c0ff6dc05dd103e618e" HandleID="k8s-pod-network.a0ab623f15e98608017ad9220884152e470e514588f60c0ff6dc05dd103e618e" Workload="ip--172--31--24--61-k8s-goldmane--7988f88666--8wwqq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000386190), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-61", "pod":"goldmane-7988f88666-8wwqq", "timestamp":"2025-09-06 00:04:59.63609756 +0000 UTC"}, Hostname:"ip-172-31-24-61", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:04:59.895517 env[1848]: 2025-09-06 00:04:59.639 [INFO][4435] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:04:59.895517 env[1848]: 2025-09-06 00:04:59.640 [INFO][4435] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:04:59.895517 env[1848]: 2025-09-06 00:04:59.641 [INFO][4435] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-61' Sep 6 00:04:59.895517 env[1848]: 2025-09-06 00:04:59.686 [INFO][4435] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a0ab623f15e98608017ad9220884152e470e514588f60c0ff6dc05dd103e618e" host="ip-172-31-24-61" Sep 6 00:04:59.895517 env[1848]: 2025-09-06 00:04:59.713 [INFO][4435] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-61" Sep 6 00:04:59.895517 env[1848]: 2025-09-06 00:04:59.730 [INFO][4435] ipam/ipam.go 511: Trying affinity for 192.168.40.128/26 host="ip-172-31-24-61" Sep 6 00:04:59.895517 env[1848]: 2025-09-06 00:04:59.734 [INFO][4435] ipam/ipam.go 158: Attempting to load block cidr=192.168.40.128/26 host="ip-172-31-24-61" Sep 6 00:04:59.895517 env[1848]: 2025-09-06 00:04:59.739 [INFO][4435] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.40.128/26 host="ip-172-31-24-61" Sep 6 00:04:59.895517 env[1848]: 2025-09-06 00:04:59.740 [INFO][4435] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.40.128/26 handle="k8s-pod-network.a0ab623f15e98608017ad9220884152e470e514588f60c0ff6dc05dd103e618e" host="ip-172-31-24-61" Sep 6 00:04:59.895517 env[1848]: 2025-09-06 00:04:59.743 [INFO][4435] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a0ab623f15e98608017ad9220884152e470e514588f60c0ff6dc05dd103e618e Sep 6 00:04:59.895517 env[1848]: 2025-09-06 00:04:59.753 [INFO][4435] ipam/ipam.go 1243: Writing 
block in order to claim IPs block=192.168.40.128/26 handle="k8s-pod-network.a0ab623f15e98608017ad9220884152e470e514588f60c0ff6dc05dd103e618e" host="ip-172-31-24-61" Sep 6 00:04:59.895517 env[1848]: 2025-09-06 00:04:59.768 [INFO][4435] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.40.130/26] block=192.168.40.128/26 handle="k8s-pod-network.a0ab623f15e98608017ad9220884152e470e514588f60c0ff6dc05dd103e618e" host="ip-172-31-24-61" Sep 6 00:04:59.895517 env[1848]: 2025-09-06 00:04:59.768 [INFO][4435] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.40.130/26] handle="k8s-pod-network.a0ab623f15e98608017ad9220884152e470e514588f60c0ff6dc05dd103e618e" host="ip-172-31-24-61" Sep 6 00:04:59.895517 env[1848]: 2025-09-06 00:04:59.769 [INFO][4435] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:04:59.895517 env[1848]: 2025-09-06 00:04:59.769 [INFO][4435] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.40.130/26] IPv6=[] ContainerID="a0ab623f15e98608017ad9220884152e470e514588f60c0ff6dc05dd103e618e" HandleID="k8s-pod-network.a0ab623f15e98608017ad9220884152e470e514588f60c0ff6dc05dd103e618e" Workload="ip--172--31--24--61-k8s-goldmane--7988f88666--8wwqq-eth0" Sep 6 00:04:59.897033 env[1848]: 2025-09-06 00:04:59.790 [INFO][4379] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a0ab623f15e98608017ad9220884152e470e514588f60c0ff6dc05dd103e618e" Namespace="calico-system" Pod="goldmane-7988f88666-8wwqq" WorkloadEndpoint="ip--172--31--24--61-k8s-goldmane--7988f88666--8wwqq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--61-k8s-goldmane--7988f88666--8wwqq-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"46f8567e-23c4-451e-82e1-50ad6fac0a71", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 4, 32, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-61", ContainerID:"", Pod:"goldmane-7988f88666-8wwqq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.40.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali81342935570", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:04:59.897033 env[1848]: 2025-09-06 00:04:59.790 [INFO][4379] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.40.130/32] ContainerID="a0ab623f15e98608017ad9220884152e470e514588f60c0ff6dc05dd103e618e" Namespace="calico-system" Pod="goldmane-7988f88666-8wwqq" WorkloadEndpoint="ip--172--31--24--61-k8s-goldmane--7988f88666--8wwqq-eth0" Sep 6 00:04:59.897033 env[1848]: 2025-09-06 00:04:59.790 [INFO][4379] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali81342935570 ContainerID="a0ab623f15e98608017ad9220884152e470e514588f60c0ff6dc05dd103e618e" Namespace="calico-system" Pod="goldmane-7988f88666-8wwqq" WorkloadEndpoint="ip--172--31--24--61-k8s-goldmane--7988f88666--8wwqq-eth0" Sep 6 00:04:59.897033 env[1848]: 2025-09-06 00:04:59.800 [INFO][4379] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a0ab623f15e98608017ad9220884152e470e514588f60c0ff6dc05dd103e618e" Namespace="calico-system" Pod="goldmane-7988f88666-8wwqq" 
WorkloadEndpoint="ip--172--31--24--61-k8s-goldmane--7988f88666--8wwqq-eth0" Sep 6 00:04:59.897033 env[1848]: 2025-09-06 00:04:59.807 [INFO][4379] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a0ab623f15e98608017ad9220884152e470e514588f60c0ff6dc05dd103e618e" Namespace="calico-system" Pod="goldmane-7988f88666-8wwqq" WorkloadEndpoint="ip--172--31--24--61-k8s-goldmane--7988f88666--8wwqq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--61-k8s-goldmane--7988f88666--8wwqq-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"46f8567e-23c4-451e-82e1-50ad6fac0a71", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 4, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-61", ContainerID:"a0ab623f15e98608017ad9220884152e470e514588f60c0ff6dc05dd103e618e", Pod:"goldmane-7988f88666-8wwqq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.40.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali81342935570", MAC:"de:b2:da:49:d0:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:04:59.897033 env[1848]: 2025-09-06 00:04:59.859 
[INFO][4379] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a0ab623f15e98608017ad9220884152e470e514588f60c0ff6dc05dd103e618e" Namespace="calico-system" Pod="goldmane-7988f88666-8wwqq" WorkloadEndpoint="ip--172--31--24--61-k8s-goldmane--7988f88666--8wwqq-eth0" Sep 6 00:05:00.146432 env[1848]: time="2025-09-06T00:05:00.146052769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:05:00.146432 env[1848]: time="2025-09-06T00:05:00.146140039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:05:00.146432 env[1848]: time="2025-09-06T00:05:00.146165573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:05:00.150775 env[1848]: time="2025-09-06T00:05:00.149286320Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a0ab623f15e98608017ad9220884152e470e514588f60c0ff6dc05dd103e618e pid=4541 runtime=io.containerd.runc.v2 Sep 6 00:05:00.198310 systemd-networkd[1511]: cali279e987bca3: Gained IPv6LL Sep 6 00:05:00.198862 systemd-networkd[1511]: vxlan.calico: Link UP Sep 6 00:05:00.198871 systemd-networkd[1511]: vxlan.calico: Gained carrier Sep 6 00:05:00.272000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.272000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.272000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.272000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.272000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.272000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.272000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.272000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.272000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.272000 audit: BPF prog-id=21 op=LOAD Sep 6 00:05:00.272000 audit[4572]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffa43c128 a2=98 a3=fffffa43c118 items=0 ppid=4211 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:00.272000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:05:00.272000 audit: BPF prog-id=21 op=UNLOAD Sep 6 00:05:00.273000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.273000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.273000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.273000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.273000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.273000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.273000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.273000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 
00:05:00.273000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.273000 audit: BPF prog-id=22 op=LOAD Sep 6 00:05:00.273000 audit[4572]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffa43be08 a2=74 a3=95 items=0 ppid=4211 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:00.273000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:05:00.273000 audit: BPF prog-id=22 op=UNLOAD Sep 6 00:05:00.273000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.273000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.273000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.273000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.273000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.273000 
audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.273000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.273000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.273000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.273000 audit: BPF prog-id=23 op=LOAD Sep 6 00:05:00.273000 audit[4572]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffa43be68 a2=94 a3=2 items=0 ppid=4211 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:00.273000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:05:00.273000 audit: BPF prog-id=23 op=UNLOAD Sep 6 00:05:00.273000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.273000 audit[4572]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffffa43be98 a2=28 a3=fffffa43bfc8 items=0 ppid=4211 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:00.273000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:05:00.273000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.273000 audit[4572]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffa43bec8 a2=28 a3=fffffa43bff8 items=0 ppid=4211 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:00.273000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:05:00.273000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.273000 audit[4572]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffa43bd78 a2=28 a3=fffffa43bea8 items=0 ppid=4211 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:00.273000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:05:00.273000 audit[4572]: 
AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.273000 audit[4572]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffffa43bee8 a2=28 a3=fffffa43c018 items=0 ppid=4211 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:00.273000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:05:00.273000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.273000 audit[4572]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffffa43bec8 a2=28 a3=fffffa43bff8 items=0 ppid=4211 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:00.273000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:05:00.273000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.273000 audit[4572]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffffa43beb8 a2=28 a3=fffffa43bfe8 items=0 ppid=4211 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:00.273000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:05:00.274000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.274000 audit[4572]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffffa43bee8 a2=28 a3=fffffa43c018 items=0 ppid=4211 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:00.274000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:05:00.274000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.274000 audit[4572]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffa43bec8 a2=28 a3=fffffa43bff8 items=0 ppid=4211 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:00.274000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:05:00.274000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.274000 audit[4572]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffa43bee8 a2=28 a3=fffffa43c018 items=0 ppid=4211 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:00.274000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:05:00.274000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.274000 audit[4572]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffa43beb8 a2=28 a3=fffffa43bfe8 items=0 ppid=4211 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:00.274000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:05:00.274000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.274000 audit[4572]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffffa43bf38 a2=28 a3=fffffa43c078 items=0 ppid=4211 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:00.274000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:05:00.275000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.275000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.275000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.275000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.275000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.275000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.275000 audit[4572]: AVC avc: 
denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.275000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.275000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.275000 audit: BPF prog-id=24 op=LOAD Sep 6 00:05:00.275000 audit[4572]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffa43bd58 a2=40 a3=fffffa43bd88 items=0 ppid=4211 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:00.275000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:05:00.275000 audit: BPF prog-id=24 op=UNLOAD Sep 6 00:05:00.277000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.277000 audit[4572]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=0 a1=fffffa43bd80 a2=50 a3=0 items=0 ppid=4211 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:00.277000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:05:00.277000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.277000 audit[4572]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=0 a1=fffffa43bd80 a2=50 a3=0 items=0 ppid=4211 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:00.277000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:05:00.277000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.277000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.277000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.277000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.277000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.277000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.277000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.277000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.277000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.277000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.277000 audit: BPF prog-id=25 op=LOAD Sep 6 00:05:00.277000 audit[4572]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffa43b4e8 a2=94 a3=2 items=0 ppid=4211 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:00.277000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:05:00.278000 audit: BPF prog-id=25 op=UNLOAD Sep 6 00:05:00.278000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.278000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.278000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.278000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.278000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.278000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.278000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.278000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.278000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.278000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 6 00:05:00.278000 audit: BPF prog-id=26 op=LOAD Sep 6 00:05:00.278000 audit[4572]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffa43b678 a2=94 a3=30 items=0 ppid=4211 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:00.278000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 00:05:00.286000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.286000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.286000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.286000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.286000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.286000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.286000 audit[4576]: AVC avc: 
denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.286000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.286000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.286000 audit: BPF prog-id=27 op=LOAD Sep 6 00:05:00.286000 audit[4576]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc9fc91d8 a2=98 a3=ffffc9fc91c8 items=0 ppid=4211 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:00.286000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:05:00.286000 audit: BPF prog-id=27 op=UNLOAD Sep 6 00:05:00.286000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.286000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.286000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.286000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.286000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.286000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.286000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.286000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.286000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.286000 audit: BPF prog-id=28 op=LOAD Sep 6 00:05:00.286000 audit[4576]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc9fc8e68 a2=74 a3=95 items=0 ppid=4211 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:00.286000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:05:00.287000 audit: BPF prog-id=28 op=UNLOAD Sep 6 00:05:00.287000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.287000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.287000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.287000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.287000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.287000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.287000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.287000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.287000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:00.287000 audit: BPF prog-id=29 op=LOAD Sep 6 00:05:00.287000 audit[4576]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc9fc8ec8 a2=94 a3=2 items=0 ppid=4211 pid=4576 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:00.287000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:05:00.287000 audit: BPF prog-id=29 op=UNLOAD Sep 6 00:05:00.275067 (udev-worker)[4105]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:05:00.380230 systemd[1]: run-containerd-runc-k8s.io-a0ab623f15e98608017ad9220884152e470e514588f60c0ff6dc05dd103e618e-runc.wJFpu1.mount: Deactivated successfully. Sep 6 00:05:00.469459 env[1848]: 2025-09-06 00:05:00.079 [INFO][4461] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" Sep 6 00:05:00.469459 env[1848]: 2025-09-06 00:05:00.080 [INFO][4461] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" iface="eth0" netns="/var/run/netns/cni-757fbcd0-212d-a2da-c724-ff247420eb1a" Sep 6 00:05:00.469459 env[1848]: 2025-09-06 00:05:00.086 [INFO][4461] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" iface="eth0" netns="/var/run/netns/cni-757fbcd0-212d-a2da-c724-ff247420eb1a" Sep 6 00:05:00.469459 env[1848]: 2025-09-06 00:05:00.086 [INFO][4461] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" iface="eth0" netns="/var/run/netns/cni-757fbcd0-212d-a2da-c724-ff247420eb1a" Sep 6 00:05:00.469459 env[1848]: 2025-09-06 00:05:00.087 [INFO][4461] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" Sep 6 00:05:00.469459 env[1848]: 2025-09-06 00:05:00.087 [INFO][4461] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" Sep 6 00:05:00.469459 env[1848]: 2025-09-06 00:05:00.398 [INFO][4543] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" HandleID="k8s-pod-network.28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" Workload="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--zgcsm-eth0" Sep 6 00:05:00.469459 env[1848]: 2025-09-06 00:05:00.402 [INFO][4543] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:05:00.469459 env[1848]: 2025-09-06 00:05:00.403 [INFO][4543] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:05:00.469459 env[1848]: 2025-09-06 00:05:00.440 [WARNING][4543] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" HandleID="k8s-pod-network.28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" Workload="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--zgcsm-eth0" Sep 6 00:05:00.469459 env[1848]: 2025-09-06 00:05:00.441 [INFO][4543] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" HandleID="k8s-pod-network.28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" Workload="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--zgcsm-eth0" Sep 6 00:05:00.469459 env[1848]: 2025-09-06 00:05:00.447 [INFO][4543] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:05:00.469459 env[1848]: 2025-09-06 00:05:00.465 [INFO][4461] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" Sep 6 00:05:00.476844 systemd[1]: run-netns-cni\x2d757fbcd0\x2d212d\x2da2da\x2dc724\x2dff247420eb1a.mount: Deactivated successfully. 
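The `PROCTITLE` fields in the audit records above are the audited process's command line, hex-encoded with NUL bytes separating arguments. As a quick sanity check, they can be decoded with a few lines of Python (a minimal sketch; the hex literal below is copied verbatim from the records above):

```python
# Audit PROCTITLE records hex-encode argv, with NUL bytes between arguments.
# This decodes the proctitle value repeated in the audit records above.
hex_proctitle = (
    "627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564"
    "002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C"
    "69636F5F746D705F41"
)
argv = bytes.fromhex(hex_proctitle).split(b"\x00")
print(" ".join(a.decode() for a in argv))
# → bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A
```

So the AVC denials for `bpf`/`perfmon` capabilities are being triggered by Calico's periodic `bpftool prog show pinned …` inspection of its XDP prefilter program.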
Sep 6 00:05:00.477970 env[1848]: time="2025-09-06T00:05:00.477372282Z" level=info msg="TearDown network for sandbox \"28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120\" successfully" Sep 6 00:05:00.478964 env[1848]: time="2025-09-06T00:05:00.478878899Z" level=info msg="StopPodSandbox for \"28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120\" returns successfully" Sep 6 00:05:00.482485 env[1848]: time="2025-09-06T00:05:00.482399135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-797f9c6c85-zgcsm,Uid:d3def65b-39c6-44ff-8623-e889bb6d6c02,Namespace:calico-apiserver,Attempt:1,}" Sep 6 00:05:00.508897 env[1848]: 2025-09-06 00:04:59.668 [INFO][4457] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" Sep 6 00:05:00.508897 env[1848]: 2025-09-06 00:04:59.668 [INFO][4457] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" iface="eth0" netns="/var/run/netns/cni-2d723290-7183-3a0f-03ff-26405ede18b3" Sep 6 00:05:00.508897 env[1848]: 2025-09-06 00:04:59.678 [INFO][4457] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" iface="eth0" netns="/var/run/netns/cni-2d723290-7183-3a0f-03ff-26405ede18b3" Sep 6 00:05:00.508897 env[1848]: 2025-09-06 00:04:59.679 [INFO][4457] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" iface="eth0" netns="/var/run/netns/cni-2d723290-7183-3a0f-03ff-26405ede18b3" Sep 6 00:05:00.508897 env[1848]: 2025-09-06 00:04:59.679 [INFO][4457] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" Sep 6 00:05:00.508897 env[1848]: 2025-09-06 00:04:59.679 [INFO][4457] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" Sep 6 00:05:00.508897 env[1848]: 2025-09-06 00:05:00.409 [INFO][4497] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" HandleID="k8s-pod-network.52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" Workload="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--dz8br-eth0" Sep 6 00:05:00.508897 env[1848]: 2025-09-06 00:05:00.410 [INFO][4497] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:05:00.508897 env[1848]: 2025-09-06 00:05:00.447 [INFO][4497] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:05:00.508897 env[1848]: 2025-09-06 00:05:00.483 [WARNING][4497] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" HandleID="k8s-pod-network.52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" Workload="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--dz8br-eth0" Sep 6 00:05:00.508897 env[1848]: 2025-09-06 00:05:00.483 [INFO][4497] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" HandleID="k8s-pod-network.52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" Workload="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--dz8br-eth0" Sep 6 00:05:00.508897 env[1848]: 2025-09-06 00:05:00.493 [INFO][4497] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:05:00.508897 env[1848]: 2025-09-06 00:05:00.502 [INFO][4457] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" Sep 6 00:05:00.520697 env[1848]: time="2025-09-06T00:05:00.520599785Z" level=info msg="TearDown network for sandbox \"52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d\" successfully" Sep 6 00:05:00.520948 env[1848]: time="2025-09-06T00:05:00.520902248Z" level=info msg="StopPodSandbox for \"52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d\" returns successfully" Sep 6 00:05:00.527399 env[1848]: time="2025-09-06T00:05:00.527036277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dz8br,Uid:4dca1103-4d65-4dd5-9f1d-b159dc97b5c8,Namespace:kube-system,Attempt:1,}" Sep 6 00:05:00.581504 env[1848]: 2025-09-06 00:04:59.932 [INFO][4450] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" Sep 6 00:05:00.581504 env[1848]: 2025-09-06 00:04:59.933 [INFO][4450] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" iface="eth0" netns="/var/run/netns/cni-7af89354-8c7a-00c8-efa8-552ecd6250c7" Sep 6 00:05:00.581504 env[1848]: 2025-09-06 00:04:59.933 [INFO][4450] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" iface="eth0" netns="/var/run/netns/cni-7af89354-8c7a-00c8-efa8-552ecd6250c7" Sep 6 00:05:00.581504 env[1848]: 2025-09-06 00:04:59.934 [INFO][4450] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" iface="eth0" netns="/var/run/netns/cni-7af89354-8c7a-00c8-efa8-552ecd6250c7" Sep 6 00:05:00.581504 env[1848]: 2025-09-06 00:04:59.940 [INFO][4450] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" Sep 6 00:05:00.581504 env[1848]: 2025-09-06 00:04:59.941 [INFO][4450] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" Sep 6 00:05:00.581504 env[1848]: 2025-09-06 00:05:00.413 [INFO][4526] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" HandleID="k8s-pod-network.f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" Workload="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--mt5pq-eth0" Sep 6 00:05:00.581504 env[1848]: 2025-09-06 00:05:00.413 [INFO][4526] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:05:00.581504 env[1848]: 2025-09-06 00:05:00.494 [INFO][4526] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:05:00.581504 env[1848]: 2025-09-06 00:05:00.525 [WARNING][4526] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" HandleID="k8s-pod-network.f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" Workload="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--mt5pq-eth0" Sep 6 00:05:00.581504 env[1848]: 2025-09-06 00:05:00.526 [INFO][4526] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" HandleID="k8s-pod-network.f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" Workload="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--mt5pq-eth0" Sep 6 00:05:00.581504 env[1848]: 2025-09-06 00:05:00.550 [INFO][4526] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:05:00.581504 env[1848]: 2025-09-06 00:05:00.570 [INFO][4450] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" Sep 6 00:05:00.587093 systemd[1]: run-netns-cni\x2d2d723290\x2d7183\x2d3a0f\x2d03ff\x2d26405ede18b3.mount: Deactivated successfully. Sep 6 00:05:00.591449 env[1848]: time="2025-09-06T00:05:00.591349314Z" level=info msg="TearDown network for sandbox \"f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827\" successfully" Sep 6 00:05:00.591449 env[1848]: time="2025-09-06T00:05:00.591438588Z" level=info msg="StopPodSandbox for \"f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827\" returns successfully" Sep 6 00:05:00.597461 env[1848]: time="2025-09-06T00:05:00.597399817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mt5pq,Uid:695d8e71-3eef-4ad5-94de-576fc3ef9397,Namespace:kube-system,Attempt:1,}" Sep 6 00:05:00.616856 systemd[1]: run-netns-cni\x2d7af89354\x2d8c7a\x2d00c8\x2defa8\x2d552ecd6250c7.mount: Deactivated successfully. 
Sep 6 00:05:00.901478 systemd-networkd[1511]: cali81342935570: Gained IPv6LL Sep 6 00:05:00.919015 env[1848]: time="2025-09-06T00:05:00.918942950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-8wwqq,Uid:46f8567e-23c4-451e-82e1-50ad6fac0a71,Namespace:calico-system,Attempt:1,} returns sandbox id \"a0ab623f15e98608017ad9220884152e470e514588f60c0ff6dc05dd103e618e\"" Sep 6 00:05:01.087000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.087000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.087000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.087000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.087000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.087000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.087000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.087000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.087000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.087000 audit: BPF prog-id=30 op=LOAD Sep 6 00:05:01.087000 audit[4576]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc9fc8e88 a2=40 a3=ffffc9fc8eb8 items=0 ppid=4211 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:01.087000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:05:01.092000 audit: BPF prog-id=30 op=UNLOAD Sep 6 00:05:01.092000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.092000 audit[4576]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffc9fc8fa0 a2=50 a3=0 items=0 ppid=4211 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:01.092000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:05:01.130000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.130000 
audit[4576]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc9fc8ef8 a2=28 a3=ffffc9fc9028 items=0 ppid=4211 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:01.130000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:05:01.133000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.133000 audit[4576]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc9fc8f28 a2=28 a3=ffffc9fc9058 items=0 ppid=4211 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:01.133000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:05:01.133000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.133000 audit[4576]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc9fc8dd8 a2=28 a3=ffffc9fc8f08 items=0 ppid=4211 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:01.133000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:05:01.133000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.133000 audit[4576]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc9fc8f48 a2=28 a3=ffffc9fc9078 items=0 ppid=4211 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:01.133000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:05:01.133000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.133000 audit[4576]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc9fc8f28 a2=28 a3=ffffc9fc9058 items=0 ppid=4211 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:01.133000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:05:01.133000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.133000 audit[4576]: 
SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc9fc8f18 a2=28 a3=ffffc9fc9048 items=0 ppid=4211 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:01.133000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:05:01.133000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.133000 audit[4576]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc9fc8f48 a2=28 a3=ffffc9fc9078 items=0 ppid=4211 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:01.133000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:05:01.133000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.133000 audit[4576]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc9fc8f28 a2=28 a3=ffffc9fc9058 items=0 ppid=4211 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:01.133000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:05:01.133000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.133000 audit[4576]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc9fc8f48 a2=28 a3=ffffc9fc9078 items=0 ppid=4211 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:01.133000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:05:01.133000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.133000 audit[4576]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc9fc8f18 a2=28 a3=ffffc9fc9048 items=0 ppid=4211 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:01.133000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:05:01.133000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.133000 audit[4576]: 
SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc9fc8f98 a2=28 a3=ffffc9fc90d8 items=0 ppid=4211 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:01.133000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:05:01.134000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.134000 audit[4576]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffc9fc8cd0 a2=50 a3=0 items=0 ppid=4211 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:01.134000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:05:01.134000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.134000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.134000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.134000 audit[4576]: AVC avc: denied { 
perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.134000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.134000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.134000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.134000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.134000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.134000 audit: BPF prog-id=31 op=LOAD Sep 6 00:05:01.134000 audit[4576]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc9fc8cd8 a2=94 a3=5 items=0 ppid=4211 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:01.134000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:05:01.134000 audit: BPF prog-id=31 op=UNLOAD Sep 6 00:05:01.134000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.134000 audit[4576]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffc9fc8de0 a2=50 a3=0 items=0 ppid=4211 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:01.134000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:05:01.134000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.134000 audit[4576]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffc9fc8f28 a2=4 a3=3 items=0 ppid=4211 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:01.134000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:05:01.134000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.134000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.134000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.134000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.134000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.134000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.134000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.134000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.134000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.134000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.134000 audit[4576]: AVC avc: denied { confidentiality } for pid=4576 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 6 00:05:01.134000 audit[4576]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffc9fc8f08 a2=94 a3=6 items=0 ppid=4211 
pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:01.134000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:05:01.135000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.135000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.135000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.135000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.135000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.135000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.135000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.135000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.135000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.135000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.135000 audit[4576]: AVC avc: denied { confidentiality } for pid=4576 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 6 00:05:01.135000 audit[4576]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffc9fc86d8 a2=94 a3=83 items=0 ppid=4211 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:01.135000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:05:01.135000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.135000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.135000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 6 00:05:01.135000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.135000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.135000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.135000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.135000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.135000 audit[4576]: AVC avc: denied { perfmon } for pid=4576 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.135000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.135000 audit[4576]: AVC avc: denied { confidentiality } for pid=4576 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 6 00:05:01.135000 audit[4576]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffc9fc86d8 a2=94 a3=83 items=0 ppid=4211 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:01.135000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:05:01.136000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.136000 audit[4576]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffc9fca118 a2=10 a3=ffffc9fca208 items=0 ppid=4211 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:01.136000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:05:01.137000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.137000 audit[4576]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffc9fc9fd8 a2=10 a3=ffffc9fca0c8 items=0 ppid=4211 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:01.137000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:05:01.139000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.139000 audit[4576]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffc9fc9f48 a2=10 a3=ffffc9fca0c8 items=0 ppid=4211 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:01.139000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:05:01.139000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 00:05:01.139000 audit[4576]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffc9fc9f48 a2=10 a3=ffffc9fca0c8 items=0 ppid=4211 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:01.139000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 00:05:01.147000 audit: BPF prog-id=26 op=UNLOAD Sep 6 00:05:01.245522 env[1848]: time="2025-09-06T00:05:01.245381735Z" level=info msg="StopPodSandbox for \"ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b\"" Sep 6 00:05:01.329909 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali8aa86911e1a: link becomes ready Sep 6 00:05:01.329034 systemd-networkd[1511]: cali8aa86911e1a: Link UP Sep 6 00:05:01.329526 systemd-networkd[1511]: cali8aa86911e1a: Gained carrier Sep 6 00:05:01.379056 env[1848]: 2025-09-06 
00:05:01.004 [INFO][4598] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--zgcsm-eth0 calico-apiserver-797f9c6c85- calico-apiserver d3def65b-39c6-44ff-8623-e889bb6d6c02 934 0 2025-09-06 00:04:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:797f9c6c85 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-24-61 calico-apiserver-797f9c6c85-zgcsm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8aa86911e1a [] [] }} ContainerID="539494acd0258f9df9c9eb730289471e341c61dd2c24ad477092a1afc3ae12a1" Namespace="calico-apiserver" Pod="calico-apiserver-797f9c6c85-zgcsm" WorkloadEndpoint="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--zgcsm-" Sep 6 00:05:01.379056 env[1848]: 2025-09-06 00:05:01.004 [INFO][4598] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="539494acd0258f9df9c9eb730289471e341c61dd2c24ad477092a1afc3ae12a1" Namespace="calico-apiserver" Pod="calico-apiserver-797f9c6c85-zgcsm" WorkloadEndpoint="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--zgcsm-eth0" Sep 6 00:05:01.379056 env[1848]: 2025-09-06 00:05:01.204 [INFO][4646] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="539494acd0258f9df9c9eb730289471e341c61dd2c24ad477092a1afc3ae12a1" HandleID="k8s-pod-network.539494acd0258f9df9c9eb730289471e341c61dd2c24ad477092a1afc3ae12a1" Workload="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--zgcsm-eth0" Sep 6 00:05:01.379056 env[1848]: 2025-09-06 00:05:01.204 [INFO][4646] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="539494acd0258f9df9c9eb730289471e341c61dd2c24ad477092a1afc3ae12a1" 
HandleID="k8s-pod-network.539494acd0258f9df9c9eb730289471e341c61dd2c24ad477092a1afc3ae12a1" Workload="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--zgcsm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ca850), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-24-61", "pod":"calico-apiserver-797f9c6c85-zgcsm", "timestamp":"2025-09-06 00:05:01.204232965 +0000 UTC"}, Hostname:"ip-172-31-24-61", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:05:01.379056 env[1848]: 2025-09-06 00:05:01.205 [INFO][4646] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:05:01.379056 env[1848]: 2025-09-06 00:05:01.205 [INFO][4646] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:05:01.379056 env[1848]: 2025-09-06 00:05:01.205 [INFO][4646] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-61' Sep 6 00:05:01.379056 env[1848]: 2025-09-06 00:05:01.223 [INFO][4646] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.539494acd0258f9df9c9eb730289471e341c61dd2c24ad477092a1afc3ae12a1" host="ip-172-31-24-61" Sep 6 00:05:01.379056 env[1848]: 2025-09-06 00:05:01.247 [INFO][4646] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-61" Sep 6 00:05:01.379056 env[1848]: 2025-09-06 00:05:01.259 [INFO][4646] ipam/ipam.go 511: Trying affinity for 192.168.40.128/26 host="ip-172-31-24-61" Sep 6 00:05:01.379056 env[1848]: 2025-09-06 00:05:01.264 [INFO][4646] ipam/ipam.go 158: Attempting to load block cidr=192.168.40.128/26 host="ip-172-31-24-61" Sep 6 00:05:01.379056 env[1848]: 2025-09-06 00:05:01.270 [INFO][4646] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.40.128/26 host="ip-172-31-24-61" Sep 6 00:05:01.379056 env[1848]: 2025-09-06 
00:05:01.271 [INFO][4646] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.40.128/26 handle="k8s-pod-network.539494acd0258f9df9c9eb730289471e341c61dd2c24ad477092a1afc3ae12a1" host="ip-172-31-24-61" Sep 6 00:05:01.379056 env[1848]: 2025-09-06 00:05:01.275 [INFO][4646] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.539494acd0258f9df9c9eb730289471e341c61dd2c24ad477092a1afc3ae12a1 Sep 6 00:05:01.379056 env[1848]: 2025-09-06 00:05:01.289 [INFO][4646] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.40.128/26 handle="k8s-pod-network.539494acd0258f9df9c9eb730289471e341c61dd2c24ad477092a1afc3ae12a1" host="ip-172-31-24-61" Sep 6 00:05:01.379056 env[1848]: 2025-09-06 00:05:01.302 [INFO][4646] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.40.131/26] block=192.168.40.128/26 handle="k8s-pod-network.539494acd0258f9df9c9eb730289471e341c61dd2c24ad477092a1afc3ae12a1" host="ip-172-31-24-61" Sep 6 00:05:01.379056 env[1848]: 2025-09-06 00:05:01.302 [INFO][4646] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.40.131/26] handle="k8s-pod-network.539494acd0258f9df9c9eb730289471e341c61dd2c24ad477092a1afc3ae12a1" host="ip-172-31-24-61" Sep 6 00:05:01.379056 env[1848]: 2025-09-06 00:05:01.303 [INFO][4646] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 6 00:05:01.379056 env[1848]: 2025-09-06 00:05:01.303 [INFO][4646] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.40.131/26] IPv6=[] ContainerID="539494acd0258f9df9c9eb730289471e341c61dd2c24ad477092a1afc3ae12a1" HandleID="k8s-pod-network.539494acd0258f9df9c9eb730289471e341c61dd2c24ad477092a1afc3ae12a1" Workload="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--zgcsm-eth0" Sep 6 00:05:01.384739 env[1848]: 2025-09-06 00:05:01.319 [INFO][4598] cni-plugin/k8s.go 418: Populated endpoint ContainerID="539494acd0258f9df9c9eb730289471e341c61dd2c24ad477092a1afc3ae12a1" Namespace="calico-apiserver" Pod="calico-apiserver-797f9c6c85-zgcsm" WorkloadEndpoint="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--zgcsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--zgcsm-eth0", GenerateName:"calico-apiserver-797f9c6c85-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3def65b-39c6-44ff-8623-e889bb6d6c02", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 4, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"797f9c6c85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-61", ContainerID:"", Pod:"calico-apiserver-797f9c6c85-zgcsm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.131/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8aa86911e1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:05:01.384739 env[1848]: 2025-09-06 00:05:01.319 [INFO][4598] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.40.131/32] ContainerID="539494acd0258f9df9c9eb730289471e341c61dd2c24ad477092a1afc3ae12a1" Namespace="calico-apiserver" Pod="calico-apiserver-797f9c6c85-zgcsm" WorkloadEndpoint="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--zgcsm-eth0" Sep 6 00:05:01.384739 env[1848]: 2025-09-06 00:05:01.319 [INFO][4598] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8aa86911e1a ContainerID="539494acd0258f9df9c9eb730289471e341c61dd2c24ad477092a1afc3ae12a1" Namespace="calico-apiserver" Pod="calico-apiserver-797f9c6c85-zgcsm" WorkloadEndpoint="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--zgcsm-eth0" Sep 6 00:05:01.384739 env[1848]: 2025-09-06 00:05:01.341 [INFO][4598] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="539494acd0258f9df9c9eb730289471e341c61dd2c24ad477092a1afc3ae12a1" Namespace="calico-apiserver" Pod="calico-apiserver-797f9c6c85-zgcsm" WorkloadEndpoint="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--zgcsm-eth0" Sep 6 00:05:01.384739 env[1848]: 2025-09-06 00:05:01.342 [INFO][4598] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="539494acd0258f9df9c9eb730289471e341c61dd2c24ad477092a1afc3ae12a1" Namespace="calico-apiserver" Pod="calico-apiserver-797f9c6c85-zgcsm" WorkloadEndpoint="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--zgcsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--zgcsm-eth0", 
GenerateName:"calico-apiserver-797f9c6c85-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3def65b-39c6-44ff-8623-e889bb6d6c02", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 4, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"797f9c6c85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-61", ContainerID:"539494acd0258f9df9c9eb730289471e341c61dd2c24ad477092a1afc3ae12a1", Pod:"calico-apiserver-797f9c6c85-zgcsm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8aa86911e1a", MAC:"92:4f:94:c8:87:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:05:01.384739 env[1848]: 2025-09-06 00:05:01.367 [INFO][4598] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="539494acd0258f9df9c9eb730289471e341c61dd2c24ad477092a1afc3ae12a1" Namespace="calico-apiserver" Pod="calico-apiserver-797f9c6c85-zgcsm" WorkloadEndpoint="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--zgcsm-eth0" Sep 6 00:05:01.413452 systemd-networkd[1511]: vxlan.calico: Gained IPv6LL Sep 6 00:05:01.484466 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif2f4050e29f: link becomes ready Sep 6 00:05:01.492444 systemd-networkd[1511]: calif2f4050e29f: Link UP Sep 6 
00:05:01.494279 systemd-networkd[1511]: calif2f4050e29f: Gained carrier Sep 6 00:05:01.530845 env[1848]: 2025-09-06 00:05:00.983 [INFO][4617] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--61-k8s-coredns--7c65d6cfc9--mt5pq-eth0 coredns-7c65d6cfc9- kube-system 695d8e71-3eef-4ad5-94de-576fc3ef9397 932 0 2025-09-06 00:04:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-24-61 coredns-7c65d6cfc9-mt5pq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif2f4050e29f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fe8be527f5c5bb78aa52c570e1c695036ca9f75a09e1238c516a619768d2d701" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mt5pq" WorkloadEndpoint="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--mt5pq-" Sep 6 00:05:01.530845 env[1848]: 2025-09-06 00:05:00.983 [INFO][4617] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fe8be527f5c5bb78aa52c570e1c695036ca9f75a09e1238c516a619768d2d701" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mt5pq" WorkloadEndpoint="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--mt5pq-eth0" Sep 6 00:05:01.530845 env[1848]: 2025-09-06 00:05:01.244 [INFO][4641] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fe8be527f5c5bb78aa52c570e1c695036ca9f75a09e1238c516a619768d2d701" HandleID="k8s-pod-network.fe8be527f5c5bb78aa52c570e1c695036ca9f75a09e1238c516a619768d2d701" Workload="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--mt5pq-eth0" Sep 6 00:05:01.530845 env[1848]: 2025-09-06 00:05:01.246 [INFO][4641] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fe8be527f5c5bb78aa52c570e1c695036ca9f75a09e1238c516a619768d2d701" HandleID="k8s-pod-network.fe8be527f5c5bb78aa52c570e1c695036ca9f75a09e1238c516a619768d2d701" 
Workload="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--mt5pq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002aa140), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-24-61", "pod":"coredns-7c65d6cfc9-mt5pq", "timestamp":"2025-09-06 00:05:01.236573925 +0000 UTC"}, Hostname:"ip-172-31-24-61", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:05:01.530845 env[1848]: 2025-09-06 00:05:01.246 [INFO][4641] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:05:01.530845 env[1848]: 2025-09-06 00:05:01.303 [INFO][4641] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:05:01.530845 env[1848]: 2025-09-06 00:05:01.303 [INFO][4641] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-61' Sep 6 00:05:01.530845 env[1848]: 2025-09-06 00:05:01.366 [INFO][4641] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fe8be527f5c5bb78aa52c570e1c695036ca9f75a09e1238c516a619768d2d701" host="ip-172-31-24-61" Sep 6 00:05:01.530845 env[1848]: 2025-09-06 00:05:01.378 [INFO][4641] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-61" Sep 6 00:05:01.530845 env[1848]: 2025-09-06 00:05:01.401 [INFO][4641] ipam/ipam.go 511: Trying affinity for 192.168.40.128/26 host="ip-172-31-24-61" Sep 6 00:05:01.530845 env[1848]: 2025-09-06 00:05:01.408 [INFO][4641] ipam/ipam.go 158: Attempting to load block cidr=192.168.40.128/26 host="ip-172-31-24-61" Sep 6 00:05:01.530845 env[1848]: 2025-09-06 00:05:01.418 [INFO][4641] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.40.128/26 host="ip-172-31-24-61" Sep 6 00:05:01.530845 env[1848]: 2025-09-06 00:05:01.418 [INFO][4641] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.40.128/26 
handle="k8s-pod-network.fe8be527f5c5bb78aa52c570e1c695036ca9f75a09e1238c516a619768d2d701" host="ip-172-31-24-61" Sep 6 00:05:01.530845 env[1848]: 2025-09-06 00:05:01.431 [INFO][4641] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fe8be527f5c5bb78aa52c570e1c695036ca9f75a09e1238c516a619768d2d701 Sep 6 00:05:01.530845 env[1848]: 2025-09-06 00:05:01.445 [INFO][4641] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.40.128/26 handle="k8s-pod-network.fe8be527f5c5bb78aa52c570e1c695036ca9f75a09e1238c516a619768d2d701" host="ip-172-31-24-61" Sep 6 00:05:01.530845 env[1848]: 2025-09-06 00:05:01.464 [INFO][4641] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.40.132/26] block=192.168.40.128/26 handle="k8s-pod-network.fe8be527f5c5bb78aa52c570e1c695036ca9f75a09e1238c516a619768d2d701" host="ip-172-31-24-61" Sep 6 00:05:01.530845 env[1848]: 2025-09-06 00:05:01.464 [INFO][4641] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.40.132/26] handle="k8s-pod-network.fe8be527f5c5bb78aa52c570e1c695036ca9f75a09e1238c516a619768d2d701" host="ip-172-31-24-61" Sep 6 00:05:01.530845 env[1848]: 2025-09-06 00:05:01.464 [INFO][4641] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 6 00:05:01.530845 env[1848]: 2025-09-06 00:05:01.464 [INFO][4641] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.40.132/26] IPv6=[] ContainerID="fe8be527f5c5bb78aa52c570e1c695036ca9f75a09e1238c516a619768d2d701" HandleID="k8s-pod-network.fe8be527f5c5bb78aa52c570e1c695036ca9f75a09e1238c516a619768d2d701" Workload="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--mt5pq-eth0" Sep 6 00:05:01.532815 env[1848]: 2025-09-06 00:05:01.477 [INFO][4617] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fe8be527f5c5bb78aa52c570e1c695036ca9f75a09e1238c516a619768d2d701" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mt5pq" WorkloadEndpoint="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--mt5pq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--61-k8s-coredns--7c65d6cfc9--mt5pq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"695d8e71-3eef-4ad5-94de-576fc3ef9397", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 4, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-61", ContainerID:"", Pod:"coredns-7c65d6cfc9-mt5pq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2f4050e29f", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:05:01.532815 env[1848]: 2025-09-06 00:05:01.477 [INFO][4617] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.40.132/32] ContainerID="fe8be527f5c5bb78aa52c570e1c695036ca9f75a09e1238c516a619768d2d701" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mt5pq" WorkloadEndpoint="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--mt5pq-eth0" Sep 6 00:05:01.532815 env[1848]: 2025-09-06 00:05:01.477 [INFO][4617] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2f4050e29f ContainerID="fe8be527f5c5bb78aa52c570e1c695036ca9f75a09e1238c516a619768d2d701" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mt5pq" WorkloadEndpoint="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--mt5pq-eth0" Sep 6 00:05:01.532815 env[1848]: 2025-09-06 00:05:01.484 [INFO][4617] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fe8be527f5c5bb78aa52c570e1c695036ca9f75a09e1238c516a619768d2d701" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mt5pq" WorkloadEndpoint="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--mt5pq-eth0" Sep 6 00:05:01.532815 env[1848]: 2025-09-06 00:05:01.484 [INFO][4617] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fe8be527f5c5bb78aa52c570e1c695036ca9f75a09e1238c516a619768d2d701" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mt5pq" WorkloadEndpoint="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--mt5pq-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--61-k8s-coredns--7c65d6cfc9--mt5pq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"695d8e71-3eef-4ad5-94de-576fc3ef9397", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 4, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-61", ContainerID:"fe8be527f5c5bb78aa52c570e1c695036ca9f75a09e1238c516a619768d2d701", Pod:"coredns-7c65d6cfc9-mt5pq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2f4050e29f", MAC:"fa:b9:25:f2:a7:12", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:05:01.532815 env[1848]: 2025-09-06 00:05:01.515 [INFO][4617] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="fe8be527f5c5bb78aa52c570e1c695036ca9f75a09e1238c516a619768d2d701" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mt5pq" WorkloadEndpoint="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--mt5pq-eth0" Sep 6 00:05:01.540000 audit[4712]: NETFILTER_CFG table=mangle:103 family=2 entries=16 op=nft_register_chain pid=4712 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:05:01.540000 audit[4712]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffdcbb60f0 a2=0 a3=ffffb61ebfa8 items=0 ppid=4211 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:01.540000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:05:01.560000 audit[4711]: NETFILTER_CFG table=nat:104 family=2 entries=15 op=nft_register_chain pid=4711 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:05:01.560000 audit[4711]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffd4ddc660 a2=0 a3=ffff88aa2fa8 items=0 ppid=4211 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:01.560000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:05:01.589000 audit[4710]: NETFILTER_CFG table=raw:105 family=2 entries=21 op=nft_register_chain pid=4710 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:05:01.589000 audit[4710]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffe57d3450 a2=0 a3=ffff8ac51fa8 items=0 ppid=4211 pid=4710 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:01.589000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:05:01.612773 env[1848]: time="2025-09-06T00:05:01.607781217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:05:01.612773 env[1848]: time="2025-09-06T00:05:01.607863772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:05:01.612773 env[1848]: time="2025-09-06T00:05:01.607891970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:05:01.612773 env[1848]: time="2025-09-06T00:05:01.608150386Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/539494acd0258f9df9c9eb730289471e341c61dd2c24ad477092a1afc3ae12a1 pid=4723 runtime=io.containerd.runc.v2 Sep 6 00:05:01.610000 audit[4727]: NETFILTER_CFG table=filter:106 family=2 entries=130 op=nft_register_chain pid=4727 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:05:01.610000 audit[4727]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=75396 a0=3 a1=ffffcf1f6f50 a2=0 a3=ffff9dcd4fa8 items=0 ppid=4211 pid=4727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:01.610000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:05:01.768147 systemd-networkd[1511]: cali7406f9a9a1a: Link UP Sep 6 00:05:01.775140 systemd-networkd[1511]: cali7406f9a9a1a: Gained carrier Sep 6 00:05:01.775364 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7406f9a9a1a: link becomes ready Sep 6 00:05:01.799913 env[1848]: time="2025-09-06T00:05:01.792116142Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:05:01.799913 env[1848]: time="2025-09-06T00:05:01.792848827Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:05:01.799913 env[1848]: time="2025-09-06T00:05:01.792993574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:05:01.799913 env[1848]: time="2025-09-06T00:05:01.795557798Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe8be527f5c5bb78aa52c570e1c695036ca9f75a09e1238c516a619768d2d701 pid=4767 runtime=io.containerd.runc.v2 Sep 6 00:05:01.820000 audit[4773]: NETFILTER_CFG table=filter:107 family=2 entries=94 op=nft_register_chain pid=4773 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:05:01.822638 env[1848]: 2025-09-06 00:05:01.048 [INFO][4608] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--61-k8s-coredns--7c65d6cfc9--dz8br-eth0 coredns-7c65d6cfc9- kube-system 4dca1103-4d65-4dd5-9f1d-b159dc97b5c8 928 0 2025-09-06 00:04:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 
ip-172-31-24-61 coredns-7c65d6cfc9-dz8br eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7406f9a9a1a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="24ce57417410d5c5b7c7abde6985213050cbe3fa33dbc94cb81f7a854a7f0b0d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dz8br" WorkloadEndpoint="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--dz8br-" Sep 6 00:05:01.822638 env[1848]: 2025-09-06 00:05:01.049 [INFO][4608] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="24ce57417410d5c5b7c7abde6985213050cbe3fa33dbc94cb81f7a854a7f0b0d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dz8br" WorkloadEndpoint="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--dz8br-eth0" Sep 6 00:05:01.822638 env[1848]: 2025-09-06 00:05:01.376 [INFO][4652] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24ce57417410d5c5b7c7abde6985213050cbe3fa33dbc94cb81f7a854a7f0b0d" HandleID="k8s-pod-network.24ce57417410d5c5b7c7abde6985213050cbe3fa33dbc94cb81f7a854a7f0b0d" Workload="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--dz8br-eth0" Sep 6 00:05:01.822638 env[1848]: 2025-09-06 00:05:01.376 [INFO][4652] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="24ce57417410d5c5b7c7abde6985213050cbe3fa33dbc94cb81f7a854a7f0b0d" HandleID="k8s-pod-network.24ce57417410d5c5b7c7abde6985213050cbe3fa33dbc94cb81f7a854a7f0b0d" Workload="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--dz8br-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000313bc0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-24-61", "pod":"coredns-7c65d6cfc9-dz8br", "timestamp":"2025-09-06 00:05:01.3759407 +0000 UTC"}, Hostname:"ip-172-31-24-61", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:05:01.822638 env[1848]: 2025-09-06 00:05:01.376 [INFO][4652] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:05:01.822638 env[1848]: 2025-09-06 00:05:01.464 [INFO][4652] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:05:01.822638 env[1848]: 2025-09-06 00:05:01.464 [INFO][4652] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-61' Sep 6 00:05:01.822638 env[1848]: 2025-09-06 00:05:01.511 [INFO][4652] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.24ce57417410d5c5b7c7abde6985213050cbe3fa33dbc94cb81f7a854a7f0b0d" host="ip-172-31-24-61" Sep 6 00:05:01.822638 env[1848]: 2025-09-06 00:05:01.526 [INFO][4652] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-61" Sep 6 00:05:01.822638 env[1848]: 2025-09-06 00:05:01.548 [INFO][4652] ipam/ipam.go 511: Trying affinity for 192.168.40.128/26 host="ip-172-31-24-61" Sep 6 00:05:01.822638 env[1848]: 2025-09-06 00:05:01.553 [INFO][4652] ipam/ipam.go 158: Attempting to load block cidr=192.168.40.128/26 host="ip-172-31-24-61" Sep 6 00:05:01.822638 env[1848]: 2025-09-06 00:05:01.568 [INFO][4652] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.40.128/26 host="ip-172-31-24-61" Sep 6 00:05:01.822638 env[1848]: 2025-09-06 00:05:01.571 [INFO][4652] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.40.128/26 handle="k8s-pod-network.24ce57417410d5c5b7c7abde6985213050cbe3fa33dbc94cb81f7a854a7f0b0d" host="ip-172-31-24-61" Sep 6 00:05:01.822638 env[1848]: 2025-09-06 00:05:01.595 [INFO][4652] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.24ce57417410d5c5b7c7abde6985213050cbe3fa33dbc94cb81f7a854a7f0b0d Sep 6 00:05:01.822638 env[1848]: 2025-09-06 00:05:01.633 [INFO][4652] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.40.128/26 handle="k8s-pod-network.24ce57417410d5c5b7c7abde6985213050cbe3fa33dbc94cb81f7a854a7f0b0d" host="ip-172-31-24-61" Sep 6 00:05:01.822638 env[1848]: 2025-09-06 00:05:01.655 
[INFO][4652] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.40.133/26] block=192.168.40.128/26 handle="k8s-pod-network.24ce57417410d5c5b7c7abde6985213050cbe3fa33dbc94cb81f7a854a7f0b0d" host="ip-172-31-24-61" Sep 6 00:05:01.822638 env[1848]: 2025-09-06 00:05:01.656 [INFO][4652] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.40.133/26] handle="k8s-pod-network.24ce57417410d5c5b7c7abde6985213050cbe3fa33dbc94cb81f7a854a7f0b0d" host="ip-172-31-24-61" Sep 6 00:05:01.822638 env[1848]: 2025-09-06 00:05:01.656 [INFO][4652] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:05:01.822638 env[1848]: 2025-09-06 00:05:01.656 [INFO][4652] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.40.133/26] IPv6=[] ContainerID="24ce57417410d5c5b7c7abde6985213050cbe3fa33dbc94cb81f7a854a7f0b0d" HandleID="k8s-pod-network.24ce57417410d5c5b7c7abde6985213050cbe3fa33dbc94cb81f7a854a7f0b0d" Workload="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--dz8br-eth0" Sep 6 00:05:01.820000 audit[4773]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=51900 a0=3 a1=fffff20b8550 a2=0 a3=ffff9f058fa8 items=0 ppid=4211 pid=4773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:01.820000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:05:01.824385 env[1848]: 2025-09-06 00:05:01.690 [INFO][4608] cni-plugin/k8s.go 418: Populated endpoint ContainerID="24ce57417410d5c5b7c7abde6985213050cbe3fa33dbc94cb81f7a854a7f0b0d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dz8br" WorkloadEndpoint="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--dz8br-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--61-k8s-coredns--7c65d6cfc9--dz8br-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4dca1103-4d65-4dd5-9f1d-b159dc97b5c8", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 4, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-61", ContainerID:"", Pod:"coredns-7c65d6cfc9-dz8br", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7406f9a9a1a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:05:01.824385 env[1848]: 2025-09-06 00:05:01.699 [INFO][4608] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.40.133/32] ContainerID="24ce57417410d5c5b7c7abde6985213050cbe3fa33dbc94cb81f7a854a7f0b0d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dz8br" 
WorkloadEndpoint="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--dz8br-eth0" Sep 6 00:05:01.824385 env[1848]: 2025-09-06 00:05:01.699 [INFO][4608] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7406f9a9a1a ContainerID="24ce57417410d5c5b7c7abde6985213050cbe3fa33dbc94cb81f7a854a7f0b0d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dz8br" WorkloadEndpoint="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--dz8br-eth0" Sep 6 00:05:01.824385 env[1848]: 2025-09-06 00:05:01.771 [INFO][4608] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="24ce57417410d5c5b7c7abde6985213050cbe3fa33dbc94cb81f7a854a7f0b0d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dz8br" WorkloadEndpoint="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--dz8br-eth0" Sep 6 00:05:01.824385 env[1848]: 2025-09-06 00:05:01.777 [INFO][4608] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="24ce57417410d5c5b7c7abde6985213050cbe3fa33dbc94cb81f7a854a7f0b0d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dz8br" WorkloadEndpoint="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--dz8br-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--61-k8s-coredns--7c65d6cfc9--dz8br-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4dca1103-4d65-4dd5-9f1d-b159dc97b5c8", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 4, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-61", ContainerID:"24ce57417410d5c5b7c7abde6985213050cbe3fa33dbc94cb81f7a854a7f0b0d", Pod:"coredns-7c65d6cfc9-dz8br", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7406f9a9a1a", MAC:"46:51:50:df:9f:0f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:05:01.824385 env[1848]: 2025-09-06 00:05:01.805 [INFO][4608] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="24ce57417410d5c5b7c7abde6985213050cbe3fa33dbc94cb81f7a854a7f0b0d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dz8br" WorkloadEndpoint="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--dz8br-eth0" Sep 6 00:05:01.906000 audit[4797]: NETFILTER_CFG table=filter:108 family=2 entries=40 op=nft_register_chain pid=4797 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:05:01.906000 audit[4797]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20328 a0=3 a1=ffffc78dc230 a2=0 a3=ffffb840afa8 items=0 ppid=4211 pid=4797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:01.906000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:05:02.004599 systemd[1]: run-containerd-runc-k8s.io-fe8be527f5c5bb78aa52c570e1c695036ca9f75a09e1238c516a619768d2d701-runc.a7hUly.mount: Deactivated successfully. Sep 6 00:05:02.041404 env[1848]: time="2025-09-06T00:05:02.040171279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:05:02.041404 env[1848]: time="2025-09-06T00:05:02.040550493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:05:02.041404 env[1848]: time="2025-09-06T00:05:02.040624660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:05:02.041404 env[1848]: time="2025-09-06T00:05:02.041173340Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/24ce57417410d5c5b7c7abde6985213050cbe3fa33dbc94cb81f7a854a7f0b0d pid=4818 runtime=io.containerd.runc.v2 Sep 6 00:05:02.123863 env[1848]: time="2025-09-06T00:05:02.123657493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-797f9c6c85-zgcsm,Uid:d3def65b-39c6-44ff-8623-e889bb6d6c02,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"539494acd0258f9df9c9eb730289471e341c61dd2c24ad477092a1afc3ae12a1\"" Sep 6 00:05:02.260262 env[1848]: time="2025-09-06T00:05:02.257607331Z" level=info msg="StopPodSandbox for \"84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb\"" Sep 6 00:05:02.279065 env[1848]: time="2025-09-06T00:05:02.278470011Z" level=info msg="StopPodSandbox for \"809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975\"" Sep 6 00:05:02.326734 env[1848]: time="2025-09-06T00:05:02.326070269Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mt5pq,Uid:695d8e71-3eef-4ad5-94de-576fc3ef9397,Namespace:kube-system,Attempt:1,} returns sandbox id \"fe8be527f5c5bb78aa52c570e1c695036ca9f75a09e1238c516a619768d2d701\"" Sep 6 00:05:02.418602 env[1848]: time="2025-09-06T00:05:02.418520226Z" level=info msg="CreateContainer within sandbox \"fe8be527f5c5bb78aa52c570e1c695036ca9f75a09e1238c516a619768d2d701\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:05:02.419130 env[1848]: time="2025-09-06T00:05:02.418827600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dz8br,Uid:4dca1103-4d65-4dd5-9f1d-b159dc97b5c8,Namespace:kube-system,Attempt:1,} returns sandbox id \"24ce57417410d5c5b7c7abde6985213050cbe3fa33dbc94cb81f7a854a7f0b0d\"" Sep 6 00:05:02.488061 env[1848]: time="2025-09-06T00:05:02.487567950Z" level=info msg="CreateContainer within sandbox \"24ce57417410d5c5b7c7abde6985213050cbe3fa33dbc94cb81f7a854a7f0b0d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:05:02.511744 env[1848]: 2025-09-06 00:05:01.956 [INFO][4695] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" Sep 6 00:05:02.511744 env[1848]: 2025-09-06 00:05:01.959 [INFO][4695] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" iface="eth0" netns="/var/run/netns/cni-c25cadac-36b2-7723-0a9e-079ce4977c23" Sep 6 00:05:02.511744 env[1848]: 2025-09-06 00:05:01.961 [INFO][4695] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" iface="eth0" netns="/var/run/netns/cni-c25cadac-36b2-7723-0a9e-079ce4977c23" Sep 6 00:05:02.511744 env[1848]: 2025-09-06 00:05:01.962 [INFO][4695] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" iface="eth0" netns="/var/run/netns/cni-c25cadac-36b2-7723-0a9e-079ce4977c23" Sep 6 00:05:02.511744 env[1848]: 2025-09-06 00:05:01.964 [INFO][4695] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" Sep 6 00:05:02.511744 env[1848]: 2025-09-06 00:05:01.964 [INFO][4695] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" Sep 6 00:05:02.511744 env[1848]: 2025-09-06 00:05:02.405 [INFO][4817] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" HandleID="k8s-pod-network.ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" Workload="ip--172--31--24--61-k8s-csi--node--driver--zpqph-eth0" Sep 6 00:05:02.511744 env[1848]: 2025-09-06 00:05:02.406 [INFO][4817] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:05:02.511744 env[1848]: 2025-09-06 00:05:02.406 [INFO][4817] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:05:02.511744 env[1848]: 2025-09-06 00:05:02.465 [WARNING][4817] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" HandleID="k8s-pod-network.ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" Workload="ip--172--31--24--61-k8s-csi--node--driver--zpqph-eth0" Sep 6 00:05:02.511744 env[1848]: 2025-09-06 00:05:02.467 [INFO][4817] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" HandleID="k8s-pod-network.ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" Workload="ip--172--31--24--61-k8s-csi--node--driver--zpqph-eth0" Sep 6 00:05:02.511744 env[1848]: 2025-09-06 00:05:02.473 [INFO][4817] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:05:02.511744 env[1848]: 2025-09-06 00:05:02.505 [INFO][4695] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" Sep 6 00:05:02.541868 env[1848]: time="2025-09-06T00:05:02.539258057Z" level=info msg="TearDown network for sandbox \"ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b\" successfully" Sep 6 00:05:02.541868 env[1848]: time="2025-09-06T00:05:02.539605605Z" level=info msg="StopPodSandbox for \"ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b\" returns successfully" Sep 6 00:05:02.545421 env[1848]: time="2025-09-06T00:05:02.545356071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zpqph,Uid:8f59dc9d-44f6-4633-8546-11f2219b7da2,Namespace:calico-system,Attempt:1,}" Sep 6 00:05:02.561237 env[1848]: time="2025-09-06T00:05:02.561150864Z" level=info msg="CreateContainer within sandbox \"fe8be527f5c5bb78aa52c570e1c695036ca9f75a09e1238c516a619768d2d701\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d1d9d9fd6c9d106ce00c2ae97d7f7c12c9cbee8fcea11f88cfe2a2ddbd4ea76c\"" Sep 6 00:05:02.565400 env[1848]: time="2025-09-06T00:05:02.565320808Z" level=info msg="StartContainer for 
\"d1d9d9fd6c9d106ce00c2ae97d7f7c12c9cbee8fcea11f88cfe2a2ddbd4ea76c\"" Sep 6 00:05:02.614482 env[1848]: time="2025-09-06T00:05:02.614406050Z" level=info msg="CreateContainer within sandbox \"24ce57417410d5c5b7c7abde6985213050cbe3fa33dbc94cb81f7a854a7f0b0d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f17f3eb163a2eb102e5aeb3780e552bfb0d5fdc1fcba446b6584c4bf25e5e07a\"" Sep 6 00:05:02.625009 systemd[1]: run-netns-cni\x2dc25cadac\x2d36b2\x2d7723\x2d0a9e\x2d079ce4977c23.mount: Deactivated successfully. Sep 6 00:05:02.647140 env[1848]: time="2025-09-06T00:05:02.635506112Z" level=info msg="StartContainer for \"f17f3eb163a2eb102e5aeb3780e552bfb0d5fdc1fcba446b6584c4bf25e5e07a\"" Sep 6 00:05:02.758560 systemd-networkd[1511]: cali8aa86911e1a: Gained IPv6LL Sep 6 00:05:02.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.24.61:22-147.75.109.163:54596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:02.904455 systemd[1]: Started sshd@7-172.31.24.61:22-147.75.109.163:54596.service. 
Sep 6 00:05:03.077413 systemd-networkd[1511]: cali7406f9a9a1a: Gained IPv6LL Sep 6 00:05:03.141423 systemd-networkd[1511]: calif2f4050e29f: Gained IPv6LL Sep 6 00:05:03.189950 kernel: kauditd_printk_skb: 554 callbacks suppressed Sep 6 00:05:03.190122 kernel: audit: type=1101 audit(1757117103.177:420): pid=4958 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:03.177000 audit[4958]: USER_ACCT pid=4958 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:03.190722 sshd[4958]: Accepted publickey for core from 147.75.109.163 port 54596 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:05:03.194777 sshd[4958]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:05:03.207162 kernel: audit: type=1103 audit(1757117103.192:421): pid=4958 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:03.192000 audit[4958]: CRED_ACQ pid=4958 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:03.224533 kernel: audit: type=1006 audit(1757117103.192:422): pid=4958 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Sep 6 00:05:03.226955 systemd[1]: Started session-8.scope. 
Sep 6 00:05:03.227435 systemd-logind[1837]: New session 8 of user core. Sep 6 00:05:03.249916 kernel: audit: type=1300 audit(1757117103.192:422): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff7bec850 a2=3 a3=1 items=0 ppid=1 pid=4958 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:03.192000 audit[4958]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff7bec850 a2=3 a3=1 items=0 ppid=1 pid=4958 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:03.277862 kernel: audit: type=1327 audit(1757117103.192:422): proctitle=737368643A20636F7265205B707269765D Sep 6 00:05:03.192000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:05:03.296576 kernel: audit: type=1105 audit(1757117103.279:423): pid=4958 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:03.279000 audit[4958]: USER_START pid=4958 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:03.314986 env[1848]: 2025-09-06 00:05:02.785 [INFO][4893] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" Sep 6 00:05:03.314986 env[1848]: 2025-09-06 00:05:02.786 [INFO][4893] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" iface="eth0" netns="/var/run/netns/cni-891af302-fe33-0c4a-640b-11f1e696aa27" Sep 6 00:05:03.314986 env[1848]: 2025-09-06 00:05:02.789 [INFO][4893] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" iface="eth0" netns="/var/run/netns/cni-891af302-fe33-0c4a-640b-11f1e696aa27" Sep 6 00:05:03.314986 env[1848]: 2025-09-06 00:05:02.792 [INFO][4893] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" iface="eth0" netns="/var/run/netns/cni-891af302-fe33-0c4a-640b-11f1e696aa27" Sep 6 00:05:03.314986 env[1848]: 2025-09-06 00:05:02.804 [INFO][4893] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" Sep 6 00:05:03.314986 env[1848]: 2025-09-06 00:05:02.804 [INFO][4893] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" Sep 6 00:05:03.314986 env[1848]: 2025-09-06 00:05:03.164 [INFO][4944] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" HandleID="k8s-pod-network.84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" Workload="ip--172--31--24--61-k8s-calico--kube--controllers--764999b789--c2t8l-eth0" Sep 6 00:05:03.314986 env[1848]: 2025-09-06 00:05:03.165 [INFO][4944] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:05:03.314986 env[1848]: 2025-09-06 00:05:03.165 [INFO][4944] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:05:03.314986 env[1848]: 2025-09-06 00:05:03.212 [WARNING][4944] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" HandleID="k8s-pod-network.84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" Workload="ip--172--31--24--61-k8s-calico--kube--controllers--764999b789--c2t8l-eth0" Sep 6 00:05:03.314986 env[1848]: 2025-09-06 00:05:03.212 [INFO][4944] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" HandleID="k8s-pod-network.84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" Workload="ip--172--31--24--61-k8s-calico--kube--controllers--764999b789--c2t8l-eth0" Sep 6 00:05:03.314986 env[1848]: 2025-09-06 00:05:03.237 [INFO][4944] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:05:03.314986 env[1848]: 2025-09-06 00:05:03.297 [INFO][4893] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" Sep 6 00:05:03.314986 env[1848]: time="2025-09-06T00:05:03.312337164Z" level=info msg="TearDown network for sandbox \"84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb\" successfully" Sep 6 00:05:03.314986 env[1848]: time="2025-09-06T00:05:03.312397712Z" level=info msg="StopPodSandbox for \"84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb\" returns successfully" Sep 6 00:05:03.316698 env[1848]: time="2025-09-06T00:05:03.316618467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-764999b789-c2t8l,Uid:92c99ea2-eb5a-48a9-9626-85e265ce8b17,Namespace:calico-system,Attempt:1,}" Sep 6 00:05:03.328661 kernel: audit: type=1103 audit(1757117103.286:424): pid=4991 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:03.286000 audit[4991]: CRED_ACQ pid=4991 uid=0 auid=500 ses=8 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:03.370123 env[1848]: 2025-09-06 00:05:02.796 [INFO][4892] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" Sep 6 00:05:03.370123 env[1848]: 2025-09-06 00:05:02.796 [INFO][4892] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" iface="eth0" netns="/var/run/netns/cni-a53735fe-5c6f-f9cf-0b7f-25be3723f753" Sep 6 00:05:03.370123 env[1848]: 2025-09-06 00:05:02.797 [INFO][4892] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" iface="eth0" netns="/var/run/netns/cni-a53735fe-5c6f-f9cf-0b7f-25be3723f753" Sep 6 00:05:03.370123 env[1848]: 2025-09-06 00:05:02.797 [INFO][4892] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" iface="eth0" netns="/var/run/netns/cni-a53735fe-5c6f-f9cf-0b7f-25be3723f753" Sep 6 00:05:03.370123 env[1848]: 2025-09-06 00:05:02.797 [INFO][4892] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" Sep 6 00:05:03.370123 env[1848]: 2025-09-06 00:05:02.797 [INFO][4892] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" Sep 6 00:05:03.370123 env[1848]: 2025-09-06 00:05:03.193 [INFO][4945] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" HandleID="k8s-pod-network.809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" Workload="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--hw6d9-eth0" Sep 6 00:05:03.370123 env[1848]: 2025-09-06 00:05:03.193 [INFO][4945] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:05:03.370123 env[1848]: 2025-09-06 00:05:03.237 [INFO][4945] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:05:03.370123 env[1848]: 2025-09-06 00:05:03.297 [WARNING][4945] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" HandleID="k8s-pod-network.809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" Workload="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--hw6d9-eth0" Sep 6 00:05:03.370123 env[1848]: 2025-09-06 00:05:03.297 [INFO][4945] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" HandleID="k8s-pod-network.809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" Workload="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--hw6d9-eth0" Sep 6 00:05:03.370123 env[1848]: 2025-09-06 00:05:03.300 [INFO][4945] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:05:03.370123 env[1848]: 2025-09-06 00:05:03.331 [INFO][4892] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" Sep 6 00:05:03.373835 env[1848]: time="2025-09-06T00:05:03.373739298Z" level=info msg="TearDown network for sandbox \"809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975\" successfully" Sep 6 00:05:03.374344 env[1848]: time="2025-09-06T00:05:03.374094130Z" level=info msg="StopPodSandbox for \"809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975\" returns successfully" Sep 6 00:05:03.381239 env[1848]: time="2025-09-06T00:05:03.381155957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-797f9c6c85-hw6d9,Uid:1c9fcc46-cf36-4206-bdd4-bb1b1b8568f2,Namespace:calico-apiserver,Attempt:1,}" Sep 6 00:05:03.398312 env[1848]: time="2025-09-06T00:05:03.396616995Z" level=info msg="StartContainer for \"d1d9d9fd6c9d106ce00c2ae97d7f7c12c9cbee8fcea11f88cfe2a2ddbd4ea76c\" returns successfully" Sep 6 00:05:03.562579 env[1848]: time="2025-09-06T00:05:03.562445201Z" level=info msg="StartContainer for \"f17f3eb163a2eb102e5aeb3780e552bfb0d5fdc1fcba446b6584c4bf25e5e07a\" returns successfully" 
Sep 6 00:05:03.634038 systemd[1]: run-netns-cni\x2da53735fe\x2d5c6f\x2df9cf\x2d0b7f\x2d25be3723f753.mount: Deactivated successfully. Sep 6 00:05:03.634326 systemd[1]: run-netns-cni\x2d891af302\x2dfe33\x2d0c4a\x2d640b\x2d11f1e696aa27.mount: Deactivated successfully. Sep 6 00:05:03.746859 kubelet[2926]: I0906 00:05:03.746754 2926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-dz8br" podStartSLOduration=54.746707065 podStartE2EDuration="54.746707065s" podCreationTimestamp="2025-09-06 00:04:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:05:03.745140095 +0000 UTC m=+59.882616043" watchObservedRunningTime="2025-09-06 00:05:03.746707065 +0000 UTC m=+59.884182821" Sep 6 00:05:03.792365 kubelet[2926]: I0906 00:05:03.791736 2926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-mt5pq" podStartSLOduration=54.791711696 podStartE2EDuration="54.791711696s" podCreationTimestamp="2025-09-06 00:04:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:05:03.791347625 +0000 UTC m=+59.928823429" watchObservedRunningTime="2025-09-06 00:05:03.791711696 +0000 UTC m=+59.929187464" Sep 6 00:05:03.815723 env[1848]: time="2025-09-06T00:05:03.815551188Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:05:03.826302 env[1848]: time="2025-09-06T00:05:03.826018535Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:05:03.846247 env[1848]: time="2025-09-06T00:05:03.846166610Z" level=info 
msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:05:03.867843 env[1848]: time="2025-09-06T00:05:03.867783261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 6 00:05:03.878589 sshd[4958]: pam_unix(sshd:session): session closed for user core Sep 6 00:05:03.884680 env[1848]: time="2025-09-06T00:05:03.884588921Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:05:03.890000 audit[4958]: USER_END pid=4958 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:03.905006 systemd[1]: sshd@7-172.31.24.61:22-147.75.109.163:54596.service: Deactivated successfully. Sep 6 00:05:03.906725 systemd[1]: session-8.scope: Deactivated successfully. Sep 6 00:05:03.909636 env[1848]: time="2025-09-06T00:05:03.909573282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 6 00:05:03.914470 systemd-logind[1837]: Session 8 logged out. Waiting for processes to exit. 
Sep 6 00:05:03.934306 kernel: audit: type=1106 audit(1757117103.890:425): pid=4958 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:03.934438 kernel: audit: type=1104 audit(1757117103.890:426): pid=4958 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:03.890000 audit[4958]: CRED_DISP pid=4958 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:03.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.24.61:22-147.75.109.163:54596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:03.945335 kernel: audit: type=1131 audit(1757117103.905:427): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.24.61:22-147.75.109.163:54596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:03.946313 systemd-logind[1837]: Removed session 8. 
Sep 6 00:05:03.956913 env[1848]: time="2025-09-06T00:05:03.956852508Z" level=info msg="CreateContainer within sandbox \"c63311d4d5b771442da034ffe7043a2e8151ecb16ce4e44c545dd42066480e2b\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 6 00:05:03.995000 audit[5058]: NETFILTER_CFG table=filter:109 family=2 entries=20 op=nft_register_rule pid=5058 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:05:03.995000 audit[5058]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffcc3ec840 a2=0 a3=1 items=0 ppid=3026 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:04.002414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1701688402.mount: Deactivated successfully. Sep 6 00:05:03.995000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:05:04.015000 audit[5058]: NETFILTER_CFG table=nat:110 family=2 entries=14 op=nft_register_rule pid=5058 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:05:04.015000 audit[5058]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffcc3ec840 a2=0 a3=1 items=0 ppid=3026 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:04.015000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:05:04.034456 env[1848]: time="2025-09-06T00:05:04.034368477Z" level=info msg="CreateContainer within sandbox \"c63311d4d5b771442da034ffe7043a2e8151ecb16ce4e44c545dd42066480e2b\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id 
\"cfb0b47b1e6f3535e8477c871e2e57213eff6f03ada34575a1f81c9653ad3d44\"" Sep 6 00:05:04.039582 env[1848]: time="2025-09-06T00:05:04.039518734Z" level=info msg="StartContainer for \"cfb0b47b1e6f3535e8477c871e2e57213eff6f03ada34575a1f81c9653ad3d44\"" Sep 6 00:05:04.060000 audit[5062]: NETFILTER_CFG table=filter:111 family=2 entries=17 op=nft_register_rule pid=5062 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:05:04.060000 audit[5062]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffeee7a1f0 a2=0 a3=1 items=0 ppid=3026 pid=5062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:04.060000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:05:04.085571 env[1848]: time="2025-09-06T00:05:04.082401135Z" level=info msg="StopPodSandbox for \"e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d\"" Sep 6 00:05:04.094000 audit[5062]: NETFILTER_CFG table=nat:112 family=2 entries=47 op=nft_register_chain pid=5062 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:05:04.094000 audit[5062]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffeee7a1f0 a2=0 a3=1 items=0 ppid=3026 pid=5062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:04.094000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:05:04.254992 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 00:05:04.255092 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali0fb53521b12: link becomes ready Sep 6 00:05:04.254662 
systemd-networkd[1511]: cali0fb53521b12: Link UP Sep 6 00:05:04.255091 systemd-networkd[1511]: cali0fb53521b12: Gained carrier Sep 6 00:05:04.320716 env[1848]: 2025-09-06 00:05:03.564 [INFO][4916] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--61-k8s-csi--node--driver--zpqph-eth0 csi-node-driver- calico-system 8f59dc9d-44f6-4633-8546-11f2219b7da2 950 0 2025-09-06 00:04:31 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-24-61 csi-node-driver-zpqph eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0fb53521b12 [] [] }} ContainerID="6cee43d7be3a590307a93668c8808b67c59fa611300d2d84e6230853b9e04207" Namespace="calico-system" Pod="csi-node-driver-zpqph" WorkloadEndpoint="ip--172--31--24--61-k8s-csi--node--driver--zpqph-" Sep 6 00:05:04.320716 env[1848]: 2025-09-06 00:05:03.564 [INFO][4916] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6cee43d7be3a590307a93668c8808b67c59fa611300d2d84e6230853b9e04207" Namespace="calico-system" Pod="csi-node-driver-zpqph" WorkloadEndpoint="ip--172--31--24--61-k8s-csi--node--driver--zpqph-eth0" Sep 6 00:05:04.320716 env[1848]: 2025-09-06 00:05:04.098 [INFO][5038] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6cee43d7be3a590307a93668c8808b67c59fa611300d2d84e6230853b9e04207" HandleID="k8s-pod-network.6cee43d7be3a590307a93668c8808b67c59fa611300d2d84e6230853b9e04207" Workload="ip--172--31--24--61-k8s-csi--node--driver--zpqph-eth0" Sep 6 00:05:04.320716 env[1848]: 2025-09-06 00:05:04.099 [INFO][5038] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="6cee43d7be3a590307a93668c8808b67c59fa611300d2d84e6230853b9e04207" HandleID="k8s-pod-network.6cee43d7be3a590307a93668c8808b67c59fa611300d2d84e6230853b9e04207" Workload="ip--172--31--24--61-k8s-csi--node--driver--zpqph-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000317610), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-61", "pod":"csi-node-driver-zpqph", "timestamp":"2025-09-06 00:05:04.098866165 +0000 UTC"}, Hostname:"ip-172-31-24-61", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:05:04.320716 env[1848]: 2025-09-06 00:05:04.099 [INFO][5038] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:05:04.320716 env[1848]: 2025-09-06 00:05:04.099 [INFO][5038] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:05:04.320716 env[1848]: 2025-09-06 00:05:04.099 [INFO][5038] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-61' Sep 6 00:05:04.320716 env[1848]: 2025-09-06 00:05:04.151 [INFO][5038] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6cee43d7be3a590307a93668c8808b67c59fa611300d2d84e6230853b9e04207" host="ip-172-31-24-61" Sep 6 00:05:04.320716 env[1848]: 2025-09-06 00:05:04.168 [INFO][5038] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-61" Sep 6 00:05:04.320716 env[1848]: 2025-09-06 00:05:04.193 [INFO][5038] ipam/ipam.go 511: Trying affinity for 192.168.40.128/26 host="ip-172-31-24-61" Sep 6 00:05:04.320716 env[1848]: 2025-09-06 00:05:04.200 [INFO][5038] ipam/ipam.go 158: Attempting to load block cidr=192.168.40.128/26 host="ip-172-31-24-61" Sep 6 00:05:04.320716 env[1848]: 2025-09-06 00:05:04.207 [INFO][5038] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.40.128/26 host="ip-172-31-24-61" 
Sep 6 00:05:04.320716 env[1848]: 2025-09-06 00:05:04.207 [INFO][5038] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.40.128/26 handle="k8s-pod-network.6cee43d7be3a590307a93668c8808b67c59fa611300d2d84e6230853b9e04207" host="ip-172-31-24-61" Sep 6 00:05:04.320716 env[1848]: 2025-09-06 00:05:04.211 [INFO][5038] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6cee43d7be3a590307a93668c8808b67c59fa611300d2d84e6230853b9e04207 Sep 6 00:05:04.320716 env[1848]: 2025-09-06 00:05:04.219 [INFO][5038] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.40.128/26 handle="k8s-pod-network.6cee43d7be3a590307a93668c8808b67c59fa611300d2d84e6230853b9e04207" host="ip-172-31-24-61" Sep 6 00:05:04.320716 env[1848]: 2025-09-06 00:05:04.233 [INFO][5038] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.40.134/26] block=192.168.40.128/26 handle="k8s-pod-network.6cee43d7be3a590307a93668c8808b67c59fa611300d2d84e6230853b9e04207" host="ip-172-31-24-61" Sep 6 00:05:04.320716 env[1848]: 2025-09-06 00:05:04.233 [INFO][5038] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.40.134/26] handle="k8s-pod-network.6cee43d7be3a590307a93668c8808b67c59fa611300d2d84e6230853b9e04207" host="ip-172-31-24-61" Sep 6 00:05:04.320716 env[1848]: 2025-09-06 00:05:04.234 [INFO][5038] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 6 00:05:04.320716 env[1848]: 2025-09-06 00:05:04.234 [INFO][5038] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.40.134/26] IPv6=[] ContainerID="6cee43d7be3a590307a93668c8808b67c59fa611300d2d84e6230853b9e04207" HandleID="k8s-pod-network.6cee43d7be3a590307a93668c8808b67c59fa611300d2d84e6230853b9e04207" Workload="ip--172--31--24--61-k8s-csi--node--driver--zpqph-eth0" Sep 6 00:05:04.322126 env[1848]: 2025-09-06 00:05:04.238 [INFO][4916] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6cee43d7be3a590307a93668c8808b67c59fa611300d2d84e6230853b9e04207" Namespace="calico-system" Pod="csi-node-driver-zpqph" WorkloadEndpoint="ip--172--31--24--61-k8s-csi--node--driver--zpqph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--61-k8s-csi--node--driver--zpqph-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8f59dc9d-44f6-4633-8546-11f2219b7da2", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 4, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-61", ContainerID:"", Pod:"csi-node-driver-zpqph", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.40.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0fb53521b12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:05:04.322126 env[1848]: 2025-09-06 00:05:04.238 [INFO][4916] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.40.134/32] ContainerID="6cee43d7be3a590307a93668c8808b67c59fa611300d2d84e6230853b9e04207" Namespace="calico-system" Pod="csi-node-driver-zpqph" WorkloadEndpoint="ip--172--31--24--61-k8s-csi--node--driver--zpqph-eth0" Sep 6 00:05:04.322126 env[1848]: 2025-09-06 00:05:04.238 [INFO][4916] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0fb53521b12 ContainerID="6cee43d7be3a590307a93668c8808b67c59fa611300d2d84e6230853b9e04207" Namespace="calico-system" Pod="csi-node-driver-zpqph" WorkloadEndpoint="ip--172--31--24--61-k8s-csi--node--driver--zpqph-eth0" Sep 6 00:05:04.322126 env[1848]: 2025-09-06 00:05:04.259 [INFO][4916] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6cee43d7be3a590307a93668c8808b67c59fa611300d2d84e6230853b9e04207" Namespace="calico-system" Pod="csi-node-driver-zpqph" WorkloadEndpoint="ip--172--31--24--61-k8s-csi--node--driver--zpqph-eth0" Sep 6 00:05:04.322126 env[1848]: 2025-09-06 00:05:04.265 [INFO][4916] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6cee43d7be3a590307a93668c8808b67c59fa611300d2d84e6230853b9e04207" Namespace="calico-system" Pod="csi-node-driver-zpqph" WorkloadEndpoint="ip--172--31--24--61-k8s-csi--node--driver--zpqph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--61-k8s-csi--node--driver--zpqph-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8f59dc9d-44f6-4633-8546-11f2219b7da2", ResourceVersion:"950", Generation:0, 
CreationTimestamp:time.Date(2025, time.September, 6, 0, 4, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-61", ContainerID:"6cee43d7be3a590307a93668c8808b67c59fa611300d2d84e6230853b9e04207", Pod:"csi-node-driver-zpqph", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.40.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0fb53521b12", MAC:"f2:f3:e8:4c:d9:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:05:04.322126 env[1848]: 2025-09-06 00:05:04.311 [INFO][4916] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6cee43d7be3a590307a93668c8808b67c59fa611300d2d84e6230853b9e04207" Namespace="calico-system" Pod="csi-node-driver-zpqph" WorkloadEndpoint="ip--172--31--24--61-k8s-csi--node--driver--zpqph-eth0" Sep 6 00:05:04.361000 audit[5118]: NETFILTER_CFG table=filter:113 family=2 entries=48 op=nft_register_chain pid=5118 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:05:04.361000 audit[5118]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23124 a0=3 a1=ffffe542f1c0 a2=0 a3=ffff9c0cefa8 items=0 ppid=4211 pid=5118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:04.361000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:05:04.565757 env[1848]: time="2025-09-06T00:05:04.565639314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:05:04.572755 env[1848]: time="2025-09-06T00:05:04.572409260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:05:04.573126 env[1848]: time="2025-09-06T00:05:04.573014893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:05:04.573756 env[1848]: time="2025-09-06T00:05:04.573680991Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6cee43d7be3a590307a93668c8808b67c59fa611300d2d84e6230853b9e04207 pid=5142 runtime=io.containerd.runc.v2 Sep 6 00:05:04.696742 systemd-networkd[1511]: calie29303d78ea: Link UP Sep 6 00:05:04.699990 systemd-networkd[1511]: calie29303d78ea: Gained carrier Sep 6 00:05:04.700215 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie29303d78ea: link becomes ready Sep 6 00:05:04.757997 env[1848]: 2025-09-06 00:05:04.151 [INFO][5010] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--61-k8s-calico--kube--controllers--764999b789--c2t8l-eth0 calico-kube-controllers-764999b789- calico-system 92c99ea2-eb5a-48a9-9626-85e265ce8b17 977 0 2025-09-06 00:04:31 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:764999b789 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-24-61 calico-kube-controllers-764999b789-c2t8l eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie29303d78ea [] [] }} ContainerID="eb7a032dbd814d4baf823b59cd5d62efd8ad669504c619c7ca7cf8f91ed031e5" Namespace="calico-system" Pod="calico-kube-controllers-764999b789-c2t8l" WorkloadEndpoint="ip--172--31--24--61-k8s-calico--kube--controllers--764999b789--c2t8l-" Sep 6 00:05:04.757997 env[1848]: 2025-09-06 00:05:04.151 [INFO][5010] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eb7a032dbd814d4baf823b59cd5d62efd8ad669504c619c7ca7cf8f91ed031e5" Namespace="calico-system" Pod="calico-kube-controllers-764999b789-c2t8l" WorkloadEndpoint="ip--172--31--24--61-k8s-calico--kube--controllers--764999b789--c2t8l-eth0" Sep 6 00:05:04.757997 env[1848]: 2025-09-06 00:05:04.345 [INFO][5092] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb7a032dbd814d4baf823b59cd5d62efd8ad669504c619c7ca7cf8f91ed031e5" HandleID="k8s-pod-network.eb7a032dbd814d4baf823b59cd5d62efd8ad669504c619c7ca7cf8f91ed031e5" Workload="ip--172--31--24--61-k8s-calico--kube--controllers--764999b789--c2t8l-eth0" Sep 6 00:05:04.757997 env[1848]: 2025-09-06 00:05:04.346 [INFO][5092] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eb7a032dbd814d4baf823b59cd5d62efd8ad669504c619c7ca7cf8f91ed031e5" HandleID="k8s-pod-network.eb7a032dbd814d4baf823b59cd5d62efd8ad669504c619c7ca7cf8f91ed031e5" Workload="ip--172--31--24--61-k8s-calico--kube--controllers--764999b789--c2t8l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cbc70), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-61", "pod":"calico-kube-controllers-764999b789-c2t8l", "timestamp":"2025-09-06 00:05:04.345920593 +0000 UTC"}, Hostname:"ip-172-31-24-61", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:05:04.757997 env[1848]: 2025-09-06 00:05:04.346 [INFO][5092] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:05:04.757997 env[1848]: 2025-09-06 00:05:04.346 [INFO][5092] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:05:04.757997 env[1848]: 2025-09-06 00:05:04.346 [INFO][5092] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-61' Sep 6 00:05:04.757997 env[1848]: 2025-09-06 00:05:04.413 [INFO][5092] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eb7a032dbd814d4baf823b59cd5d62efd8ad669504c619c7ca7cf8f91ed031e5" host="ip-172-31-24-61" Sep 6 00:05:04.757997 env[1848]: 2025-09-06 00:05:04.458 [INFO][5092] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-61" Sep 6 00:05:04.757997 env[1848]: 2025-09-06 00:05:04.518 [INFO][5092] ipam/ipam.go 511: Trying affinity for 192.168.40.128/26 host="ip-172-31-24-61" Sep 6 00:05:04.757997 env[1848]: 2025-09-06 00:05:04.534 [INFO][5092] ipam/ipam.go 158: Attempting to load block cidr=192.168.40.128/26 host="ip-172-31-24-61" Sep 6 00:05:04.757997 env[1848]: 2025-09-06 00:05:04.562 [INFO][5092] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.40.128/26 host="ip-172-31-24-61" Sep 6 00:05:04.757997 env[1848]: 2025-09-06 00:05:04.563 [INFO][5092] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.40.128/26 handle="k8s-pod-network.eb7a032dbd814d4baf823b59cd5d62efd8ad669504c619c7ca7cf8f91ed031e5" host="ip-172-31-24-61" Sep 6 00:05:04.757997 env[1848]: 2025-09-06 00:05:04.589 [INFO][5092] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.eb7a032dbd814d4baf823b59cd5d62efd8ad669504c619c7ca7cf8f91ed031e5 Sep 6 00:05:04.757997 env[1848]: 2025-09-06 00:05:04.625 [INFO][5092] ipam/ipam.go 1243: Writing block in order to claim 
IPs block=192.168.40.128/26 handle="k8s-pod-network.eb7a032dbd814d4baf823b59cd5d62efd8ad669504c619c7ca7cf8f91ed031e5" host="ip-172-31-24-61" Sep 6 00:05:04.757997 env[1848]: 2025-09-06 00:05:04.660 [INFO][5092] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.40.135/26] block=192.168.40.128/26 handle="k8s-pod-network.eb7a032dbd814d4baf823b59cd5d62efd8ad669504c619c7ca7cf8f91ed031e5" host="ip-172-31-24-61" Sep 6 00:05:04.757997 env[1848]: 2025-09-06 00:05:04.660 [INFO][5092] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.40.135/26] handle="k8s-pod-network.eb7a032dbd814d4baf823b59cd5d62efd8ad669504c619c7ca7cf8f91ed031e5" host="ip-172-31-24-61" Sep 6 00:05:04.757997 env[1848]: 2025-09-06 00:05:04.660 [INFO][5092] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:05:04.757997 env[1848]: 2025-09-06 00:05:04.660 [INFO][5092] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.40.135/26] IPv6=[] ContainerID="eb7a032dbd814d4baf823b59cd5d62efd8ad669504c619c7ca7cf8f91ed031e5" HandleID="k8s-pod-network.eb7a032dbd814d4baf823b59cd5d62efd8ad669504c619c7ca7cf8f91ed031e5" Workload="ip--172--31--24--61-k8s-calico--kube--controllers--764999b789--c2t8l-eth0" Sep 6 00:05:04.759455 env[1848]: 2025-09-06 00:05:04.668 [INFO][5010] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eb7a032dbd814d4baf823b59cd5d62efd8ad669504c619c7ca7cf8f91ed031e5" Namespace="calico-system" Pod="calico-kube-controllers-764999b789-c2t8l" WorkloadEndpoint="ip--172--31--24--61-k8s-calico--kube--controllers--764999b789--c2t8l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--61-k8s-calico--kube--controllers--764999b789--c2t8l-eth0", GenerateName:"calico-kube-controllers-764999b789-", Namespace:"calico-system", SelfLink:"", UID:"92c99ea2-eb5a-48a9-9626-85e265ce8b17", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, 
time.September, 6, 0, 4, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"764999b789", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-61", ContainerID:"", Pod:"calico-kube-controllers-764999b789-c2t8l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.40.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie29303d78ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:05:04.759455 env[1848]: 2025-09-06 00:05:04.668 [INFO][5010] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.40.135/32] ContainerID="eb7a032dbd814d4baf823b59cd5d62efd8ad669504c619c7ca7cf8f91ed031e5" Namespace="calico-system" Pod="calico-kube-controllers-764999b789-c2t8l" WorkloadEndpoint="ip--172--31--24--61-k8s-calico--kube--controllers--764999b789--c2t8l-eth0" Sep 6 00:05:04.759455 env[1848]: 2025-09-06 00:05:04.668 [INFO][5010] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie29303d78ea ContainerID="eb7a032dbd814d4baf823b59cd5d62efd8ad669504c619c7ca7cf8f91ed031e5" Namespace="calico-system" Pod="calico-kube-controllers-764999b789-c2t8l" WorkloadEndpoint="ip--172--31--24--61-k8s-calico--kube--controllers--764999b789--c2t8l-eth0" Sep 6 00:05:04.759455 env[1848]: 2025-09-06 00:05:04.702 [INFO][5010] cni-plugin/dataplane_linux.go 508: 
Disabling IPv4 forwarding ContainerID="eb7a032dbd814d4baf823b59cd5d62efd8ad669504c619c7ca7cf8f91ed031e5" Namespace="calico-system" Pod="calico-kube-controllers-764999b789-c2t8l" WorkloadEndpoint="ip--172--31--24--61-k8s-calico--kube--controllers--764999b789--c2t8l-eth0" Sep 6 00:05:04.759455 env[1848]: 2025-09-06 00:05:04.711 [INFO][5010] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eb7a032dbd814d4baf823b59cd5d62efd8ad669504c619c7ca7cf8f91ed031e5" Namespace="calico-system" Pod="calico-kube-controllers-764999b789-c2t8l" WorkloadEndpoint="ip--172--31--24--61-k8s-calico--kube--controllers--764999b789--c2t8l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--61-k8s-calico--kube--controllers--764999b789--c2t8l-eth0", GenerateName:"calico-kube-controllers-764999b789-", Namespace:"calico-system", SelfLink:"", UID:"92c99ea2-eb5a-48a9-9626-85e265ce8b17", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 4, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"764999b789", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-61", ContainerID:"eb7a032dbd814d4baf823b59cd5d62efd8ad669504c619c7ca7cf8f91ed031e5", Pod:"calico-kube-controllers-764999b789-c2t8l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.40.135/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie29303d78ea", MAC:"be:7f:42:57:36:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:05:04.759455 env[1848]: 2025-09-06 00:05:04.736 [INFO][5010] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eb7a032dbd814d4baf823b59cd5d62efd8ad669504c619c7ca7cf8f91ed031e5" Namespace="calico-system" Pod="calico-kube-controllers-764999b789-c2t8l" WorkloadEndpoint="ip--172--31--24--61-k8s-calico--kube--controllers--764999b789--c2t8l-eth0" Sep 6 00:05:04.769000 audit[5165]: NETFILTER_CFG table=filter:114 family=2 entries=52 op=nft_register_chain pid=5165 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:05:04.769000 audit[5165]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24312 a0=3 a1=ffffeadcc770 a2=0 a3=ffffa10d2fa8 items=0 ppid=4211 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:04.769000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:05:04.816453 systemd[1]: run-containerd-runc-k8s.io-6cee43d7be3a590307a93668c8808b67c59fa611300d2d84e6230853b9e04207-runc.dU2DEc.mount: Deactivated successfully. 
Sep 6 00:05:04.910981 env[1848]: time="2025-09-06T00:05:04.910917842Z" level=info msg="StartContainer for \"cfb0b47b1e6f3535e8477c871e2e57213eff6f03ada34575a1f81c9653ad3d44\" returns successfully" Sep 6 00:05:05.034064 kubelet[2926]: E0906 00:05:05.031883 2926 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/pod8f59dc9d-44f6-4633-8546-11f2219b7da2/6cee43d7be3a590307a93668c8808b67c59fa611300d2d84e6230853b9e04207\": RecentStats: unable to find data in memory cache]" Sep 6 00:05:05.051978 env[1848]: time="2025-09-06T00:05:05.051915435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zpqph,Uid:8f59dc9d-44f6-4633-8546-11f2219b7da2,Namespace:calico-system,Attempt:1,} returns sandbox id \"6cee43d7be3a590307a93668c8808b67c59fa611300d2d84e6230853b9e04207\"" Sep 6 00:05:05.062139 env[1848]: time="2025-09-06T00:05:05.061980345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:05:05.062591 env[1848]: time="2025-09-06T00:05:05.062513828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:05:05.062864 env[1848]: time="2025-09-06T00:05:05.062770736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:05:05.063611 env[1848]: time="2025-09-06T00:05:05.063478798Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eb7a032dbd814d4baf823b59cd5d62efd8ad669504c619c7ca7cf8f91ed031e5 pid=5206 runtime=io.containerd.runc.v2 Sep 6 00:05:05.074548 systemd-networkd[1511]: calie9f7ab0e601: Link UP Sep 6 00:05:05.079434 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie9f7ab0e601: link becomes ready Sep 6 00:05:05.078997 systemd-networkd[1511]: calie9f7ab0e601: Gained carrier Sep 6 00:05:05.084565 env[1848]: 2025-09-06 00:05:04.512 [WARNING][5083] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" WorkloadEndpoint="ip--172--31--24--61-k8s-whisker--b6bdf98bc--kchk5-eth0" Sep 6 00:05:05.084565 env[1848]: 2025-09-06 00:05:04.514 [INFO][5083] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" Sep 6 00:05:05.084565 env[1848]: 2025-09-06 00:05:04.514 [INFO][5083] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" iface="eth0" netns="" Sep 6 00:05:05.084565 env[1848]: 2025-09-06 00:05:04.514 [INFO][5083] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" Sep 6 00:05:05.084565 env[1848]: 2025-09-06 00:05:04.514 [INFO][5083] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" Sep 6 00:05:05.084565 env[1848]: 2025-09-06 00:05:04.934 [INFO][5140] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" HandleID="k8s-pod-network.e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" Workload="ip--172--31--24--61-k8s-whisker--b6bdf98bc--kchk5-eth0" Sep 6 00:05:05.084565 env[1848]: 2025-09-06 00:05:04.935 [INFO][5140] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:05:05.084565 env[1848]: 2025-09-06 00:05:05.049 [INFO][5140] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:05:05.084565 env[1848]: 2025-09-06 00:05:05.066 [WARNING][5140] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" HandleID="k8s-pod-network.e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" Workload="ip--172--31--24--61-k8s-whisker--b6bdf98bc--kchk5-eth0" Sep 6 00:05:05.084565 env[1848]: 2025-09-06 00:05:05.066 [INFO][5140] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" HandleID="k8s-pod-network.e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" Workload="ip--172--31--24--61-k8s-whisker--b6bdf98bc--kchk5-eth0" Sep 6 00:05:05.084565 env[1848]: 2025-09-06 00:05:05.070 [INFO][5140] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:05:05.084565 env[1848]: 2025-09-06 00:05:05.081 [INFO][5083] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" Sep 6 00:05:05.089690 env[1848]: time="2025-09-06T00:05:05.089621091Z" level=info msg="TearDown network for sandbox \"e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d\" successfully" Sep 6 00:05:05.089973 env[1848]: time="2025-09-06T00:05:05.089912209Z" level=info msg="StopPodSandbox for \"e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d\" returns successfully" Sep 6 00:05:05.091381 env[1848]: time="2025-09-06T00:05:05.091333986Z" level=info msg="RemovePodSandbox for \"e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d\"" Sep 6 00:05:05.092019 env[1848]: time="2025-09-06T00:05:05.091951957Z" level=info msg="Forcibly stopping sandbox \"e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d\"" Sep 6 00:05:05.156414 env[1848]: 2025-09-06 00:05:04.175 [INFO][5022] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--hw6d9-eth0 calico-apiserver-797f9c6c85- calico-apiserver 
1c9fcc46-cf36-4206-bdd4-bb1b1b8568f2 980 0 2025-09-06 00:04:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:797f9c6c85 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-24-61 calico-apiserver-797f9c6c85-hw6d9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie9f7ab0e601 [] [] }} ContainerID="48dc23c262ce56081ef0eb0d8b65c1734f9b65c6f0f104c00de819bcab7fac62" Namespace="calico-apiserver" Pod="calico-apiserver-797f9c6c85-hw6d9" WorkloadEndpoint="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--hw6d9-" Sep 6 00:05:05.156414 env[1848]: 2025-09-06 00:05:04.180 [INFO][5022] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="48dc23c262ce56081ef0eb0d8b65c1734f9b65c6f0f104c00de819bcab7fac62" Namespace="calico-apiserver" Pod="calico-apiserver-797f9c6c85-hw6d9" WorkloadEndpoint="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--hw6d9-eth0" Sep 6 00:05:05.156414 env[1848]: 2025-09-06 00:05:04.886 [INFO][5102] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="48dc23c262ce56081ef0eb0d8b65c1734f9b65c6f0f104c00de819bcab7fac62" HandleID="k8s-pod-network.48dc23c262ce56081ef0eb0d8b65c1734f9b65c6f0f104c00de819bcab7fac62" Workload="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--hw6d9-eth0" Sep 6 00:05:05.156414 env[1848]: 2025-09-06 00:05:04.886 [INFO][5102] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="48dc23c262ce56081ef0eb0d8b65c1734f9b65c6f0f104c00de819bcab7fac62" HandleID="k8s-pod-network.48dc23c262ce56081ef0eb0d8b65c1734f9b65c6f0f104c00de819bcab7fac62" Workload="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--hw6d9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024a290), Attrs:map[string]string{"namespace":"calico-apiserver", 
"node":"ip-172-31-24-61", "pod":"calico-apiserver-797f9c6c85-hw6d9", "timestamp":"2025-09-06 00:05:04.886279333 +0000 UTC"}, Hostname:"ip-172-31-24-61", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 00:05:05.156414 env[1848]: 2025-09-06 00:05:04.887 [INFO][5102] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:05:05.156414 env[1848]: 2025-09-06 00:05:04.887 [INFO][5102] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:05:05.156414 env[1848]: 2025-09-06 00:05:04.887 [INFO][5102] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-61' Sep 6 00:05:05.156414 env[1848]: 2025-09-06 00:05:04.945 [INFO][5102] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.48dc23c262ce56081ef0eb0d8b65c1734f9b65c6f0f104c00de819bcab7fac62" host="ip-172-31-24-61" Sep 6 00:05:05.156414 env[1848]: 2025-09-06 00:05:04.958 [INFO][5102] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-61" Sep 6 00:05:05.156414 env[1848]: 2025-09-06 00:05:04.969 [INFO][5102] ipam/ipam.go 511: Trying affinity for 192.168.40.128/26 host="ip-172-31-24-61" Sep 6 00:05:05.156414 env[1848]: 2025-09-06 00:05:04.975 [INFO][5102] ipam/ipam.go 158: Attempting to load block cidr=192.168.40.128/26 host="ip-172-31-24-61" Sep 6 00:05:05.156414 env[1848]: 2025-09-06 00:05:04.983 [INFO][5102] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.40.128/26 host="ip-172-31-24-61" Sep 6 00:05:05.156414 env[1848]: 2025-09-06 00:05:04.983 [INFO][5102] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.40.128/26 handle="k8s-pod-network.48dc23c262ce56081ef0eb0d8b65c1734f9b65c6f0f104c00de819bcab7fac62" host="ip-172-31-24-61" Sep 6 00:05:05.156414 env[1848]: 2025-09-06 00:05:04.991 [INFO][5102] ipam/ipam.go 1764: 
Creating new handle: k8s-pod-network.48dc23c262ce56081ef0eb0d8b65c1734f9b65c6f0f104c00de819bcab7fac62 Sep 6 00:05:05.156414 env[1848]: 2025-09-06 00:05:05.007 [INFO][5102] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.40.128/26 handle="k8s-pod-network.48dc23c262ce56081ef0eb0d8b65c1734f9b65c6f0f104c00de819bcab7fac62" host="ip-172-31-24-61" Sep 6 00:05:05.156414 env[1848]: 2025-09-06 00:05:05.036 [INFO][5102] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.40.136/26] block=192.168.40.128/26 handle="k8s-pod-network.48dc23c262ce56081ef0eb0d8b65c1734f9b65c6f0f104c00de819bcab7fac62" host="ip-172-31-24-61" Sep 6 00:05:05.156414 env[1848]: 2025-09-06 00:05:05.036 [INFO][5102] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.40.136/26] handle="k8s-pod-network.48dc23c262ce56081ef0eb0d8b65c1734f9b65c6f0f104c00de819bcab7fac62" host="ip-172-31-24-61" Sep 6 00:05:05.156414 env[1848]: 2025-09-06 00:05:05.036 [INFO][5102] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 6 00:05:05.156414 env[1848]: 2025-09-06 00:05:05.036 [INFO][5102] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.40.136/26] IPv6=[] ContainerID="48dc23c262ce56081ef0eb0d8b65c1734f9b65c6f0f104c00de819bcab7fac62" HandleID="k8s-pod-network.48dc23c262ce56081ef0eb0d8b65c1734f9b65c6f0f104c00de819bcab7fac62" Workload="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--hw6d9-eth0" Sep 6 00:05:05.153000 audit[5249]: NETFILTER_CFG table=filter:115 family=2 entries=63 op=nft_register_chain pid=5249 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 00:05:05.153000 audit[5249]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=30664 a0=3 a1=ffffe8d63260 a2=0 a3=ffff9e78bfa8 items=0 ppid=4211 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:05.153000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 00:05:05.158248 env[1848]: 2025-09-06 00:05:05.064 [INFO][5022] cni-plugin/k8s.go 418: Populated endpoint ContainerID="48dc23c262ce56081ef0eb0d8b65c1734f9b65c6f0f104c00de819bcab7fac62" Namespace="calico-apiserver" Pod="calico-apiserver-797f9c6c85-hw6d9" WorkloadEndpoint="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--hw6d9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--hw6d9-eth0", GenerateName:"calico-apiserver-797f9c6c85-", Namespace:"calico-apiserver", SelfLink:"", UID:"1c9fcc46-cf36-4206-bdd4-bb1b1b8568f2", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 4, 22, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"797f9c6c85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-61", ContainerID:"", Pod:"calico-apiserver-797f9c6c85-hw6d9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie9f7ab0e601", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:05:05.158248 env[1848]: 2025-09-06 00:05:05.064 [INFO][5022] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.40.136/32] ContainerID="48dc23c262ce56081ef0eb0d8b65c1734f9b65c6f0f104c00de819bcab7fac62" Namespace="calico-apiserver" Pod="calico-apiserver-797f9c6c85-hw6d9" WorkloadEndpoint="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--hw6d9-eth0" Sep 6 00:05:05.158248 env[1848]: 2025-09-06 00:05:05.064 [INFO][5022] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie9f7ab0e601 ContainerID="48dc23c262ce56081ef0eb0d8b65c1734f9b65c6f0f104c00de819bcab7fac62" Namespace="calico-apiserver" Pod="calico-apiserver-797f9c6c85-hw6d9" WorkloadEndpoint="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--hw6d9-eth0" Sep 6 00:05:05.158248 env[1848]: 2025-09-06 00:05:05.094 [INFO][5022] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="48dc23c262ce56081ef0eb0d8b65c1734f9b65c6f0f104c00de819bcab7fac62" 
Namespace="calico-apiserver" Pod="calico-apiserver-797f9c6c85-hw6d9" WorkloadEndpoint="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--hw6d9-eth0" Sep 6 00:05:05.158248 env[1848]: 2025-09-06 00:05:05.095 [INFO][5022] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="48dc23c262ce56081ef0eb0d8b65c1734f9b65c6f0f104c00de819bcab7fac62" Namespace="calico-apiserver" Pod="calico-apiserver-797f9c6c85-hw6d9" WorkloadEndpoint="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--hw6d9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--hw6d9-eth0", GenerateName:"calico-apiserver-797f9c6c85-", Namespace:"calico-apiserver", SelfLink:"", UID:"1c9fcc46-cf36-4206-bdd4-bb1b1b8568f2", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 4, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"797f9c6c85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-61", ContainerID:"48dc23c262ce56081ef0eb0d8b65c1734f9b65c6f0f104c00de819bcab7fac62", Pod:"calico-apiserver-797f9c6c85-hw6d9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie9f7ab0e601", 
MAC:"6a:94:a5:ce:d4:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:05:05.158248 env[1848]: 2025-09-06 00:05:05.124 [INFO][5022] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="48dc23c262ce56081ef0eb0d8b65c1734f9b65c6f0f104c00de819bcab7fac62" Namespace="calico-apiserver" Pod="calico-apiserver-797f9c6c85-hw6d9" WorkloadEndpoint="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--hw6d9-eth0" Sep 6 00:05:05.239864 env[1848]: time="2025-09-06T00:05:05.239673024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:05:05.239864 env[1848]: time="2025-09-06T00:05:05.239783958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:05:05.240354 env[1848]: time="2025-09-06T00:05:05.240245001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:05:05.241053 env[1848]: time="2025-09-06T00:05:05.240882159Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/48dc23c262ce56081ef0eb0d8b65c1734f9b65c6f0f104c00de819bcab7fac62 pid=5262 runtime=io.containerd.runc.v2 Sep 6 00:05:05.378283 env[1848]: time="2025-09-06T00:05:05.378174241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-764999b789-c2t8l,Uid:92c99ea2-eb5a-48a9-9626-85e265ce8b17,Namespace:calico-system,Attempt:1,} returns sandbox id \"eb7a032dbd814d4baf823b59cd5d62efd8ad669504c619c7ca7cf8f91ed031e5\"" Sep 6 00:05:05.436983 env[1848]: 2025-09-06 00:05:05.283 [WARNING][5243] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" WorkloadEndpoint="ip--172--31--24--61-k8s-whisker--b6bdf98bc--kchk5-eth0" Sep 6 00:05:05.436983 env[1848]: 2025-09-06 00:05:05.284 [INFO][5243] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" Sep 6 00:05:05.436983 env[1848]: 2025-09-06 00:05:05.284 [INFO][5243] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" iface="eth0" netns="" Sep 6 00:05:05.436983 env[1848]: 2025-09-06 00:05:05.284 [INFO][5243] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" Sep 6 00:05:05.436983 env[1848]: 2025-09-06 00:05:05.284 [INFO][5243] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" Sep 6 00:05:05.436983 env[1848]: 2025-09-06 00:05:05.403 [INFO][5292] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" HandleID="k8s-pod-network.e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" Workload="ip--172--31--24--61-k8s-whisker--b6bdf98bc--kchk5-eth0" Sep 6 00:05:05.436983 env[1848]: 2025-09-06 00:05:05.403 [INFO][5292] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:05:05.436983 env[1848]: 2025-09-06 00:05:05.404 [INFO][5292] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:05:05.436983 env[1848]: 2025-09-06 00:05:05.423 [WARNING][5292] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" HandleID="k8s-pod-network.e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" Workload="ip--172--31--24--61-k8s-whisker--b6bdf98bc--kchk5-eth0" Sep 6 00:05:05.436983 env[1848]: 2025-09-06 00:05:05.423 [INFO][5292] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" HandleID="k8s-pod-network.e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" Workload="ip--172--31--24--61-k8s-whisker--b6bdf98bc--kchk5-eth0" Sep 6 00:05:05.436983 env[1848]: 2025-09-06 00:05:05.427 [INFO][5292] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:05:05.436983 env[1848]: 2025-09-06 00:05:05.430 [INFO][5243] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d" Sep 6 00:05:05.437950 env[1848]: time="2025-09-06T00:05:05.437892991Z" level=info msg="TearDown network for sandbox \"e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d\" successfully" Sep 6 00:05:05.446955 env[1848]: time="2025-09-06T00:05:05.446885619Z" level=info msg="RemovePodSandbox \"e1f9ffdcaa4f410abd37ff09ee5f7b21342235edec9db9dd4fd7d231415d157d\" returns successfully" Sep 6 00:05:05.448207 env[1848]: time="2025-09-06T00:05:05.448122965Z" level=info msg="StopPodSandbox for \"7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b\"" Sep 6 00:05:05.522830 env[1848]: time="2025-09-06T00:05:05.522768292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-797f9c6c85-hw6d9,Uid:1c9fcc46-cf36-4206-bdd4-bb1b1b8568f2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"48dc23c262ce56081ef0eb0d8b65c1734f9b65c6f0f104c00de819bcab7fac62\"" Sep 6 00:05:05.695474 env[1848]: 2025-09-06 00:05:05.575 [WARNING][5322] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, 
don't delete WEP. ContainerID="7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--61-k8s-goldmane--7988f88666--8wwqq-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"46f8567e-23c4-451e-82e1-50ad6fac0a71", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 4, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-61", ContainerID:"a0ab623f15e98608017ad9220884152e470e514588f60c0ff6dc05dd103e618e", Pod:"goldmane-7988f88666-8wwqq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.40.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali81342935570", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:05:05.695474 env[1848]: 2025-09-06 00:05:05.576 [INFO][5322] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" Sep 6 00:05:05.695474 env[1848]: 2025-09-06 00:05:05.576 [INFO][5322] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" iface="eth0" netns="" Sep 6 00:05:05.695474 env[1848]: 2025-09-06 00:05:05.576 [INFO][5322] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" Sep 6 00:05:05.695474 env[1848]: 2025-09-06 00:05:05.576 [INFO][5322] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" Sep 6 00:05:05.695474 env[1848]: 2025-09-06 00:05:05.664 [INFO][5335] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" HandleID="k8s-pod-network.7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" Workload="ip--172--31--24--61-k8s-goldmane--7988f88666--8wwqq-eth0" Sep 6 00:05:05.695474 env[1848]: 2025-09-06 00:05:05.664 [INFO][5335] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:05:05.695474 env[1848]: 2025-09-06 00:05:05.664 [INFO][5335] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:05:05.695474 env[1848]: 2025-09-06 00:05:05.683 [WARNING][5335] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" HandleID="k8s-pod-network.7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" Workload="ip--172--31--24--61-k8s-goldmane--7988f88666--8wwqq-eth0" Sep 6 00:05:05.695474 env[1848]: 2025-09-06 00:05:05.683 [INFO][5335] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" HandleID="k8s-pod-network.7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" Workload="ip--172--31--24--61-k8s-goldmane--7988f88666--8wwqq-eth0" Sep 6 00:05:05.695474 env[1848]: 2025-09-06 00:05:05.687 [INFO][5335] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:05:05.695474 env[1848]: 2025-09-06 00:05:05.690 [INFO][5322] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" Sep 6 00:05:05.697544 env[1848]: time="2025-09-06T00:05:05.697487114Z" level=info msg="TearDown network for sandbox \"7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b\" successfully" Sep 6 00:05:05.697706 env[1848]: time="2025-09-06T00:05:05.697665378Z" level=info msg="StopPodSandbox for \"7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b\" returns successfully" Sep 6 00:05:05.718280 env[1848]: time="2025-09-06T00:05:05.717020286Z" level=info msg="RemovePodSandbox for \"7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b\"" Sep 6 00:05:05.718280 env[1848]: time="2025-09-06T00:05:05.717156059Z" level=info msg="Forcibly stopping sandbox \"7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b\"" Sep 6 00:05:05.965406 systemd-networkd[1511]: cali0fb53521b12: Gained IPv6LL Sep 6 00:05:06.116968 env[1848]: 2025-09-06 00:05:05.913 [WARNING][5350] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--61-k8s-goldmane--7988f88666--8wwqq-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"46f8567e-23c4-451e-82e1-50ad6fac0a71", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 4, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-61", ContainerID:"a0ab623f15e98608017ad9220884152e470e514588f60c0ff6dc05dd103e618e", Pod:"goldmane-7988f88666-8wwqq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.40.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali81342935570", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:05:06.116968 env[1848]: 2025-09-06 00:05:05.916 [INFO][5350] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" Sep 6 00:05:06.116968 env[1848]: 2025-09-06 00:05:05.916 [INFO][5350] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" iface="eth0" netns="" Sep 6 00:05:06.116968 env[1848]: 2025-09-06 00:05:05.916 [INFO][5350] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" Sep 6 00:05:06.116968 env[1848]: 2025-09-06 00:05:05.916 [INFO][5350] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" Sep 6 00:05:06.116968 env[1848]: 2025-09-06 00:05:06.080 [INFO][5357] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" HandleID="k8s-pod-network.7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" Workload="ip--172--31--24--61-k8s-goldmane--7988f88666--8wwqq-eth0" Sep 6 00:05:06.116968 env[1848]: 2025-09-06 00:05:06.081 [INFO][5357] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:05:06.116968 env[1848]: 2025-09-06 00:05:06.081 [INFO][5357] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:05:06.116968 env[1848]: 2025-09-06 00:05:06.095 [WARNING][5357] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" HandleID="k8s-pod-network.7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" Workload="ip--172--31--24--61-k8s-goldmane--7988f88666--8wwqq-eth0" Sep 6 00:05:06.116968 env[1848]: 2025-09-06 00:05:06.095 [INFO][5357] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" HandleID="k8s-pod-network.7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" Workload="ip--172--31--24--61-k8s-goldmane--7988f88666--8wwqq-eth0" Sep 6 00:05:06.116968 env[1848]: 2025-09-06 00:05:06.099 [INFO][5357] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:05:06.116968 env[1848]: 2025-09-06 00:05:06.110 [INFO][5350] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b" Sep 6 00:05:06.118627 env[1848]: time="2025-09-06T00:05:06.116992928Z" level=info msg="TearDown network for sandbox \"7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b\" successfully" Sep 6 00:05:06.125465 env[1848]: time="2025-09-06T00:05:06.125400580Z" level=info msg="RemovePodSandbox \"7ddbea226dc153895b202dc4fca979bd0a10b6f85f0b1f8e1323a3df2b58377b\" returns successfully" Sep 6 00:05:06.130587 env[1848]: time="2025-09-06T00:05:06.130499604Z" level=info msg="StopPodSandbox for \"f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827\"" Sep 6 00:05:06.213983 systemd-networkd[1511]: calie29303d78ea: Gained IPv6LL Sep 6 00:05:06.345060 env[1848]: 2025-09-06 00:05:06.265 [WARNING][5371] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--61-k8s-coredns--7c65d6cfc9--mt5pq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"695d8e71-3eef-4ad5-94de-576fc3ef9397", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 4, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-61", ContainerID:"fe8be527f5c5bb78aa52c570e1c695036ca9f75a09e1238c516a619768d2d701", Pod:"coredns-7c65d6cfc9-mt5pq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2f4050e29f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:05:06.345060 env[1848]: 2025-09-06 00:05:06.266 
[INFO][5371] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" Sep 6 00:05:06.345060 env[1848]: 2025-09-06 00:05:06.266 [INFO][5371] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" iface="eth0" netns="" Sep 6 00:05:06.345060 env[1848]: 2025-09-06 00:05:06.266 [INFO][5371] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" Sep 6 00:05:06.345060 env[1848]: 2025-09-06 00:05:06.266 [INFO][5371] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" Sep 6 00:05:06.345060 env[1848]: 2025-09-06 00:05:06.317 [INFO][5378] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" HandleID="k8s-pod-network.f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" Workload="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--mt5pq-eth0" Sep 6 00:05:06.345060 env[1848]: 2025-09-06 00:05:06.317 [INFO][5378] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:05:06.345060 env[1848]: 2025-09-06 00:05:06.317 [INFO][5378] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:05:06.345060 env[1848]: 2025-09-06 00:05:06.333 [WARNING][5378] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" HandleID="k8s-pod-network.f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" Workload="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--mt5pq-eth0" Sep 6 00:05:06.345060 env[1848]: 2025-09-06 00:05:06.333 [INFO][5378] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" HandleID="k8s-pod-network.f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" Workload="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--mt5pq-eth0" Sep 6 00:05:06.345060 env[1848]: 2025-09-06 00:05:06.337 [INFO][5378] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:05:06.345060 env[1848]: 2025-09-06 00:05:06.342 [INFO][5371] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" Sep 6 00:05:06.347283 env[1848]: time="2025-09-06T00:05:06.347220222Z" level=info msg="TearDown network for sandbox \"f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827\" successfully" Sep 6 00:05:06.351807 env[1848]: time="2025-09-06T00:05:06.351620432Z" level=info msg="StopPodSandbox for \"f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827\" returns successfully" Sep 6 00:05:06.369831 env[1848]: time="2025-09-06T00:05:06.369774218Z" level=info msg="RemovePodSandbox for \"f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827\"" Sep 6 00:05:06.370100 env[1848]: time="2025-09-06T00:05:06.370037847Z" level=info msg="Forcibly stopping sandbox \"f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827\"" Sep 6 00:05:06.469684 systemd-networkd[1511]: calie9f7ab0e601: Gained IPv6LL Sep 6 00:05:06.563814 env[1848]: 2025-09-06 00:05:06.472 [WARNING][5394] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--61-k8s-coredns--7c65d6cfc9--mt5pq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"695d8e71-3eef-4ad5-94de-576fc3ef9397", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 4, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-61", ContainerID:"fe8be527f5c5bb78aa52c570e1c695036ca9f75a09e1238c516a619768d2d701", Pod:"coredns-7c65d6cfc9-mt5pq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2f4050e29f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:05:06.563814 env[1848]: 2025-09-06 00:05:06.473 
[INFO][5394] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" Sep 6 00:05:06.563814 env[1848]: 2025-09-06 00:05:06.473 [INFO][5394] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" iface="eth0" netns="" Sep 6 00:05:06.563814 env[1848]: 2025-09-06 00:05:06.473 [INFO][5394] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" Sep 6 00:05:06.563814 env[1848]: 2025-09-06 00:05:06.473 [INFO][5394] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" Sep 6 00:05:06.563814 env[1848]: 2025-09-06 00:05:06.530 [INFO][5401] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" HandleID="k8s-pod-network.f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" Workload="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--mt5pq-eth0" Sep 6 00:05:06.563814 env[1848]: 2025-09-06 00:05:06.530 [INFO][5401] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:05:06.563814 env[1848]: 2025-09-06 00:05:06.530 [INFO][5401] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:05:06.563814 env[1848]: 2025-09-06 00:05:06.548 [WARNING][5401] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" HandleID="k8s-pod-network.f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" Workload="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--mt5pq-eth0" Sep 6 00:05:06.563814 env[1848]: 2025-09-06 00:05:06.548 [INFO][5401] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" HandleID="k8s-pod-network.f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" Workload="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--mt5pq-eth0" Sep 6 00:05:06.563814 env[1848]: 2025-09-06 00:05:06.555 [INFO][5401] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:05:06.563814 env[1848]: 2025-09-06 00:05:06.559 [INFO][5394] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827" Sep 6 00:05:06.565057 env[1848]: time="2025-09-06T00:05:06.563833889Z" level=info msg="TearDown network for sandbox \"f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827\" successfully" Sep 6 00:05:06.583369 env[1848]: time="2025-09-06T00:05:06.580558069Z" level=info msg="RemovePodSandbox \"f6e96eb8e2283e70353153ebf7209e3c320e117f3792f7b6ed687ede5b7f5827\" returns successfully" Sep 6 00:05:06.588219 env[1848]: time="2025-09-06T00:05:06.588123826Z" level=info msg="StopPodSandbox for \"52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d\"" Sep 6 00:05:06.834436 env[1848]: 2025-09-06 00:05:06.696 [WARNING][5423] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--61-k8s-coredns--7c65d6cfc9--dz8br-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4dca1103-4d65-4dd5-9f1d-b159dc97b5c8", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 4, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-61", ContainerID:"24ce57417410d5c5b7c7abde6985213050cbe3fa33dbc94cb81f7a854a7f0b0d", Pod:"coredns-7c65d6cfc9-dz8br", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7406f9a9a1a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:05:06.834436 env[1848]: 2025-09-06 00:05:06.697 
[INFO][5423] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" Sep 6 00:05:06.834436 env[1848]: 2025-09-06 00:05:06.697 [INFO][5423] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" iface="eth0" netns="" Sep 6 00:05:06.834436 env[1848]: 2025-09-06 00:05:06.697 [INFO][5423] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" Sep 6 00:05:06.834436 env[1848]: 2025-09-06 00:05:06.697 [INFO][5423] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" Sep 6 00:05:06.834436 env[1848]: 2025-09-06 00:05:06.801 [INFO][5430] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" HandleID="k8s-pod-network.52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" Workload="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--dz8br-eth0" Sep 6 00:05:06.834436 env[1848]: 2025-09-06 00:05:06.802 [INFO][5430] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:05:06.834436 env[1848]: 2025-09-06 00:05:06.802 [INFO][5430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:05:06.834436 env[1848]: 2025-09-06 00:05:06.824 [WARNING][5430] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" HandleID="k8s-pod-network.52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" Workload="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--dz8br-eth0" Sep 6 00:05:06.834436 env[1848]: 2025-09-06 00:05:06.824 [INFO][5430] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" HandleID="k8s-pod-network.52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" Workload="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--dz8br-eth0" Sep 6 00:05:06.834436 env[1848]: 2025-09-06 00:05:06.827 [INFO][5430] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:05:06.834436 env[1848]: 2025-09-06 00:05:06.831 [INFO][5423] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" Sep 6 00:05:06.836263 env[1848]: time="2025-09-06T00:05:06.834482142Z" level=info msg="TearDown network for sandbox \"52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d\" successfully" Sep 6 00:05:06.836263 env[1848]: time="2025-09-06T00:05:06.834532396Z" level=info msg="StopPodSandbox for \"52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d\" returns successfully" Sep 6 00:05:06.836263 env[1848]: time="2025-09-06T00:05:06.835447872Z" level=info msg="RemovePodSandbox for \"52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d\"" Sep 6 00:05:06.836263 env[1848]: time="2025-09-06T00:05:06.835516713Z" level=info msg="Forcibly stopping sandbox \"52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d\"" Sep 6 00:05:07.002176 env[1848]: 2025-09-06 00:05:06.916 [WARNING][5446] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--61-k8s-coredns--7c65d6cfc9--dz8br-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4dca1103-4d65-4dd5-9f1d-b159dc97b5c8", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 4, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-61", ContainerID:"24ce57417410d5c5b7c7abde6985213050cbe3fa33dbc94cb81f7a854a7f0b0d", Pod:"coredns-7c65d6cfc9-dz8br", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7406f9a9a1a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:05:07.002176 env[1848]: 2025-09-06 00:05:06.916 
[INFO][5446] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" Sep 6 00:05:07.002176 env[1848]: 2025-09-06 00:05:06.916 [INFO][5446] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" iface="eth0" netns="" Sep 6 00:05:07.002176 env[1848]: 2025-09-06 00:05:06.916 [INFO][5446] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" Sep 6 00:05:07.002176 env[1848]: 2025-09-06 00:05:06.916 [INFO][5446] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" Sep 6 00:05:07.002176 env[1848]: 2025-09-06 00:05:06.979 [INFO][5453] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" HandleID="k8s-pod-network.52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" Workload="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--dz8br-eth0" Sep 6 00:05:07.002176 env[1848]: 2025-09-06 00:05:06.980 [INFO][5453] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:05:07.002176 env[1848]: 2025-09-06 00:05:06.980 [INFO][5453] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:05:07.002176 env[1848]: 2025-09-06 00:05:06.993 [WARNING][5453] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" HandleID="k8s-pod-network.52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" Workload="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--dz8br-eth0" Sep 6 00:05:07.002176 env[1848]: 2025-09-06 00:05:06.993 [INFO][5453] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" HandleID="k8s-pod-network.52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" Workload="ip--172--31--24--61-k8s-coredns--7c65d6cfc9--dz8br-eth0" Sep 6 00:05:07.002176 env[1848]: 2025-09-06 00:05:06.996 [INFO][5453] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:05:07.002176 env[1848]: 2025-09-06 00:05:06.999 [INFO][5446] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d" Sep 6 00:05:07.003421 env[1848]: time="2025-09-06T00:05:07.002251326Z" level=info msg="TearDown network for sandbox \"52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d\" successfully" Sep 6 00:05:07.020253 env[1848]: time="2025-09-06T00:05:07.020121752Z" level=info msg="RemovePodSandbox \"52c133716278be05d02d86baa6705bb13690deaf8f067f6bd9dc07b46e6df83d\" returns successfully" Sep 6 00:05:07.020969 env[1848]: time="2025-09-06T00:05:07.020916444Z" level=info msg="StopPodSandbox for \"28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120\"" Sep 6 00:05:07.193005 env[1848]: 2025-09-06 00:05:07.108 [WARNING][5468] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--zgcsm-eth0", GenerateName:"calico-apiserver-797f9c6c85-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3def65b-39c6-44ff-8623-e889bb6d6c02", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 4, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"797f9c6c85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-61", ContainerID:"539494acd0258f9df9c9eb730289471e341c61dd2c24ad477092a1afc3ae12a1", Pod:"calico-apiserver-797f9c6c85-zgcsm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8aa86911e1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:05:07.193005 env[1848]: 2025-09-06 00:05:07.112 [INFO][5468] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" Sep 6 00:05:07.193005 env[1848]: 2025-09-06 00:05:07.113 [INFO][5468] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" iface="eth0" netns="" Sep 6 00:05:07.193005 env[1848]: 2025-09-06 00:05:07.113 [INFO][5468] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" Sep 6 00:05:07.193005 env[1848]: 2025-09-06 00:05:07.113 [INFO][5468] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" Sep 6 00:05:07.193005 env[1848]: 2025-09-06 00:05:07.165 [INFO][5475] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" HandleID="k8s-pod-network.28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" Workload="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--zgcsm-eth0" Sep 6 00:05:07.193005 env[1848]: 2025-09-06 00:05:07.165 [INFO][5475] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:05:07.193005 env[1848]: 2025-09-06 00:05:07.166 [INFO][5475] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:05:07.193005 env[1848]: 2025-09-06 00:05:07.184 [WARNING][5475] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" HandleID="k8s-pod-network.28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" Workload="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--zgcsm-eth0" Sep 6 00:05:07.193005 env[1848]: 2025-09-06 00:05:07.184 [INFO][5475] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" HandleID="k8s-pod-network.28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" Workload="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--zgcsm-eth0" Sep 6 00:05:07.193005 env[1848]: 2025-09-06 00:05:07.187 [INFO][5475] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:05:07.193005 env[1848]: 2025-09-06 00:05:07.189 [INFO][5468] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" Sep 6 00:05:07.194172 env[1848]: time="2025-09-06T00:05:07.194118400Z" level=info msg="TearDown network for sandbox \"28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120\" successfully" Sep 6 00:05:07.194357 env[1848]: time="2025-09-06T00:05:07.194319992Z" level=info msg="StopPodSandbox for \"28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120\" returns successfully" Sep 6 00:05:07.195256 env[1848]: time="2025-09-06T00:05:07.195149423Z" level=info msg="RemovePodSandbox for \"28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120\"" Sep 6 00:05:07.195451 env[1848]: time="2025-09-06T00:05:07.195254911Z" level=info msg="Forcibly stopping sandbox \"28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120\"" Sep 6 00:05:07.387703 env[1848]: 2025-09-06 00:05:07.306 [WARNING][5490] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--zgcsm-eth0", GenerateName:"calico-apiserver-797f9c6c85-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3def65b-39c6-44ff-8623-e889bb6d6c02", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 4, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"797f9c6c85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-61", ContainerID:"539494acd0258f9df9c9eb730289471e341c61dd2c24ad477092a1afc3ae12a1", Pod:"calico-apiserver-797f9c6c85-zgcsm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8aa86911e1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:05:07.387703 env[1848]: 2025-09-06 00:05:07.306 [INFO][5490] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" Sep 6 00:05:07.387703 env[1848]: 2025-09-06 00:05:07.306 [INFO][5490] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" iface="eth0" netns="" Sep 6 00:05:07.387703 env[1848]: 2025-09-06 00:05:07.307 [INFO][5490] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" Sep 6 00:05:07.387703 env[1848]: 2025-09-06 00:05:07.307 [INFO][5490] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" Sep 6 00:05:07.387703 env[1848]: 2025-09-06 00:05:07.358 [INFO][5497] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" HandleID="k8s-pod-network.28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" Workload="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--zgcsm-eth0" Sep 6 00:05:07.387703 env[1848]: 2025-09-06 00:05:07.358 [INFO][5497] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:05:07.387703 env[1848]: 2025-09-06 00:05:07.359 [INFO][5497] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:05:07.387703 env[1848]: 2025-09-06 00:05:07.379 [WARNING][5497] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" HandleID="k8s-pod-network.28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" Workload="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--zgcsm-eth0" Sep 6 00:05:07.387703 env[1848]: 2025-09-06 00:05:07.379 [INFO][5497] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" HandleID="k8s-pod-network.28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" Workload="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--zgcsm-eth0" Sep 6 00:05:07.387703 env[1848]: 2025-09-06 00:05:07.382 [INFO][5497] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:05:07.387703 env[1848]: 2025-09-06 00:05:07.384 [INFO][5490] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120" Sep 6 00:05:07.388683 env[1848]: time="2025-09-06T00:05:07.387745180Z" level=info msg="TearDown network for sandbox \"28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120\" successfully" Sep 6 00:05:07.394796 env[1848]: time="2025-09-06T00:05:07.394719965Z" level=info msg="RemovePodSandbox \"28a6d1bb787e121deaaf906bbdabed7fc3dd23b5c0dab628d66fa66cf7a3c120\" returns successfully" Sep 6 00:05:07.704731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount901791545.mount: Deactivated successfully. Sep 6 00:05:08.906796 systemd[1]: Started sshd@8-172.31.24.61:22-147.75.109.163:54612.service. Sep 6 00:05:08.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.24.61:22-147.75.109.163:54612 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:05:08.909206 kernel: kauditd_printk_skb: 21 callbacks suppressed Sep 6 00:05:08.909303 kernel: audit: type=1130 audit(1757117108.905:435): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.24.61:22-147.75.109.163:54612 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:09.014720 env[1848]: time="2025-09-06T00:05:09.010603156Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:05:09.022602 env[1848]: time="2025-09-06T00:05:09.022511954Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:05:09.028754 env[1848]: time="2025-09-06T00:05:09.028694909Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:05:09.036055 env[1848]: time="2025-09-06T00:05:09.035998413Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:05:09.040744 env[1848]: time="2025-09-06T00:05:09.038342681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 6 00:05:09.044137 env[1848]: time="2025-09-06T00:05:09.044001457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 6 00:05:09.047052 env[1848]: time="2025-09-06T00:05:09.046939226Z" level=info msg="CreateContainer within sandbox 
\"a0ab623f15e98608017ad9220884152e470e514588f60c0ff6dc05dd103e618e\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 6 00:05:09.107939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1012213648.mount: Deactivated successfully. Sep 6 00:05:09.136947 env[1848]: time="2025-09-06T00:05:09.136878542Z" level=info msg="CreateContainer within sandbox \"a0ab623f15e98608017ad9220884152e470e514588f60c0ff6dc05dd103e618e\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"bd8c590e3adb75480b1d8fdfd093d78e6d81f7d307ce449a129f1213933b39b6\"" Sep 6 00:05:09.140176 env[1848]: time="2025-09-06T00:05:09.139905816Z" level=info msg="StartContainer for \"bd8c590e3adb75480b1d8fdfd093d78e6d81f7d307ce449a129f1213933b39b6\"" Sep 6 00:05:09.141000 audit[5523]: USER_ACCT pid=5523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:09.158130 sshd[5523]: Accepted publickey for core from 147.75.109.163 port 54612 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:05:09.173444 kernel: audit: type=1101 audit(1757117109.141:436): pid=5523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:09.173630 kernel: audit: type=1103 audit(1757117109.159:437): pid=5523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:09.159000 audit[5523]: CRED_ACQ pid=5523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:09.180336 kernel: audit: type=1006 audit(1757117109.159:438): pid=5523 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Sep 6 00:05:09.181039 sshd[5523]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:05:09.159000 audit[5523]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffefbc4ea0 a2=3 a3=1 items=0 ppid=1 pid=5523 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:09.159000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:05:09.202324 kernel: audit: type=1300 audit(1757117109.159:438): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffefbc4ea0 a2=3 a3=1 items=0 ppid=1 pid=5523 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:09.202793 kernel: audit: type=1327 audit(1757117109.159:438): proctitle=737368643A20636F7265205B707269765D Sep 6 00:05:09.215897 systemd-logind[1837]: New session 9 of user core. Sep 6 00:05:09.224933 systemd[1]: Started session-9.scope. 
Sep 6 00:05:09.245000 audit[5523]: USER_START pid=5523 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:09.246000 audit[5539]: CRED_ACQ pid=5539 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:09.271553 kernel: audit: type=1105 audit(1757117109.245:439): pid=5523 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:09.271717 kernel: audit: type=1103 audit(1757117109.246:440): pid=5539 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:09.296010 systemd[1]: run-containerd-runc-k8s.io-bd8c590e3adb75480b1d8fdfd093d78e6d81f7d307ce449a129f1213933b39b6-runc.5Rij7f.mount: Deactivated successfully. 
Sep 6 00:05:09.414028 env[1848]: time="2025-09-06T00:05:09.413840050Z" level=info msg="StartContainer for \"bd8c590e3adb75480b1d8fdfd093d78e6d81f7d307ce449a129f1213933b39b6\" returns successfully" Sep 6 00:05:09.579699 sshd[5523]: pam_unix(sshd:session): session closed for user core Sep 6 00:05:09.582000 audit[5523]: USER_END pid=5523 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:09.587231 systemd-logind[1837]: Session 9 logged out. Waiting for processes to exit. Sep 6 00:05:09.598843 systemd[1]: sshd@8-172.31.24.61:22-147.75.109.163:54612.service: Deactivated successfully. Sep 6 00:05:09.600710 systemd[1]: session-9.scope: Deactivated successfully. Sep 6 00:05:09.582000 audit[5523]: CRED_DISP pid=5523 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:09.614944 kernel: audit: type=1106 audit(1757117109.582:441): pid=5523 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:09.615095 kernel: audit: type=1104 audit(1757117109.582:442): pid=5523 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:09.608787 systemd-logind[1837]: Removed session 9. 
Sep 6 00:05:09.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.24.61:22-147.75.109.163:54612 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:09.919000 audit[5590]: NETFILTER_CFG table=filter:116 family=2 entries=14 op=nft_register_rule pid=5590 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:05:09.919000 audit[5590]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc57dbc00 a2=0 a3=1 items=0 ppid=3026 pid=5590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:09.919000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:05:09.925000 audit[5590]: NETFILTER_CFG table=nat:117 family=2 entries=20 op=nft_register_rule pid=5590 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:05:09.925000 audit[5590]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc57dbc00 a2=0 a3=1 items=0 ppid=3026 pid=5590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:09.925000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:05:10.922156 systemd[1]: run-containerd-runc-k8s.io-bd8c590e3adb75480b1d8fdfd093d78e6d81f7d307ce449a129f1213933b39b6-runc.mbln8X.mount: Deactivated successfully. 
Sep 6 00:05:12.720142 env[1848]: time="2025-09-06T00:05:12.720083557Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:05:12.725815 env[1848]: time="2025-09-06T00:05:12.725741225Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:05:12.729674 env[1848]: time="2025-09-06T00:05:12.729599642Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:05:12.733543 env[1848]: time="2025-09-06T00:05:12.733466831Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:05:12.734961 env[1848]: time="2025-09-06T00:05:12.734899610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 6 00:05:12.737658 env[1848]: time="2025-09-06T00:05:12.737602201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 6 00:05:12.742467 env[1848]: time="2025-09-06T00:05:12.741760276Z" level=info msg="CreateContainer within sandbox \"539494acd0258f9df9c9eb730289471e341c61dd2c24ad477092a1afc3ae12a1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 6 00:05:12.778706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2204412674.mount: Deactivated successfully. 
Sep 6 00:05:12.785656 env[1848]: time="2025-09-06T00:05:12.785585855Z" level=info msg="CreateContainer within sandbox \"539494acd0258f9df9c9eb730289471e341c61dd2c24ad477092a1afc3ae12a1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d8bd511821e25009967e7e8dd52a0a9639eccbfe90f57d4e3f5bc973ea512b41\"" Sep 6 00:05:12.790059 env[1848]: time="2025-09-06T00:05:12.789863866Z" level=info msg="StartContainer for \"d8bd511821e25009967e7e8dd52a0a9639eccbfe90f57d4e3f5bc973ea512b41\"" Sep 6 00:05:12.852722 systemd[1]: run-containerd-runc-k8s.io-d8bd511821e25009967e7e8dd52a0a9639eccbfe90f57d4e3f5bc973ea512b41-runc.iDC31e.mount: Deactivated successfully. Sep 6 00:05:12.956949 env[1848]: time="2025-09-06T00:05:12.956855183Z" level=info msg="StartContainer for \"d8bd511821e25009967e7e8dd52a0a9639eccbfe90f57d4e3f5bc973ea512b41\" returns successfully" Sep 6 00:05:13.892012 kubelet[2926]: I0906 00:05:13.891922 2926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-8wwqq" podStartSLOduration=33.780693529 podStartE2EDuration="41.891871089s" podCreationTimestamp="2025-09-06 00:04:32 +0000 UTC" firstStartedPulling="2025-09-06 00:05:00.931267236 +0000 UTC m=+57.068742992" lastFinishedPulling="2025-09-06 00:05:09.042444796 +0000 UTC m=+65.179920552" observedRunningTime="2025-09-06 00:05:09.872319157 +0000 UTC m=+66.009794925" watchObservedRunningTime="2025-09-06 00:05:13.891871089 +0000 UTC m=+70.029346833" Sep 6 00:05:13.945064 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 6 00:05:13.945280 kernel: audit: type=1325 audit(1757117113.935:446): table=filter:118 family=2 entries=14 op=nft_register_rule pid=5661 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:05:13.935000 audit[5661]: NETFILTER_CFG table=filter:118 family=2 entries=14 op=nft_register_rule pid=5661 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:05:13.935000 audit[5661]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=5248 a0=3 a1=ffffc26e9490 a2=0 a3=1 items=0 ppid=3026 pid=5661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:13.957938 kernel: audit: type=1300 audit(1757117113.935:446): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc26e9490 a2=0 a3=1 items=0 ppid=3026 pid=5661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:13.935000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:05:13.964107 kernel: audit: type=1327 audit(1757117113.935:446): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:05:13.985965 kernel: audit: type=1325 audit(1757117113.966:447): table=nat:119 family=2 entries=20 op=nft_register_rule pid=5661 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:05:13.986117 kernel: audit: type=1300 audit(1757117113.966:447): arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc26e9490 a2=0 a3=1 items=0 ppid=3026 pid=5661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:13.966000 audit[5661]: NETFILTER_CFG table=nat:119 family=2 entries=20 op=nft_register_rule pid=5661 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:05:13.966000 audit[5661]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc26e9490 a2=0 a3=1 items=0 ppid=3026 pid=5661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:13.966000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:05:13.992863 kernel: audit: type=1327 audit(1757117113.966:447): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:05:14.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.24.61:22-147.75.109.163:42638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:14.608094 systemd[1]: Started sshd@9-172.31.24.61:22-147.75.109.163:42638.service. Sep 6 00:05:14.628388 kernel: audit: type=1130 audit(1757117114.607:448): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.24.61:22-147.75.109.163:42638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:05:14.866000 audit[5663]: USER_ACCT pid=5663 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:14.878706 sshd[5663]: Accepted publickey for core from 147.75.109.163 port 42638 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:05:14.898117 kernel: audit: type=1101 audit(1757117114.866:449): pid=5663 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:14.898307 kernel: audit: type=1103 audit(1757117114.886:450): pid=5663 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:14.886000 audit[5663]: CRED_ACQ pid=5663 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:14.888929 sshd[5663]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:05:14.905915 kernel: audit: type=1006 audit(1757117114.886:451): pid=5663 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Sep 6 00:05:14.886000 audit[5663]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff0d7d0c0 a2=3 a3=1 items=0 ppid=1 pid=5663 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:14.886000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:05:14.916310 systemd-logind[1837]: New session 10 of user core. Sep 6 00:05:14.918169 systemd[1]: Started session-10.scope. Sep 6 00:05:14.943000 audit[5663]: USER_START pid=5663 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:14.946000 audit[5666]: CRED_ACQ pid=5666 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:15.424357 sshd[5663]: pam_unix(sshd:session): session closed for user core Sep 6 00:05:15.425000 audit[5663]: USER_END pid=5663 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:15.425000 audit[5663]: CRED_DISP pid=5663 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:15.429572 systemd[1]: sshd@9-172.31.24.61:22-147.75.109.163:42638.service: Deactivated successfully. Sep 6 00:05:15.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.24.61:22-147.75.109.163:42638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:05:15.431588 systemd[1]: session-10.scope: Deactivated successfully. Sep 6 00:05:15.431894 systemd-logind[1837]: Session 10 logged out. Waiting for processes to exit. Sep 6 00:05:15.435220 systemd-logind[1837]: Removed session 10. Sep 6 00:05:15.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.24.61:22-147.75.109.163:42648 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:15.450698 systemd[1]: Started sshd@10-172.31.24.61:22-147.75.109.163:42648.service. Sep 6 00:05:15.518910 systemd[1]: run-containerd-runc-k8s.io-bd8c590e3adb75480b1d8fdfd093d78e6d81f7d307ce449a129f1213933b39b6-runc.Cmgo0x.mount: Deactivated successfully. Sep 6 00:05:15.637513 kubelet[2926]: I0906 00:05:15.636851 2926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-797f9c6c85-zgcsm" podStartSLOduration=43.028219635 podStartE2EDuration="53.636824672s" podCreationTimestamp="2025-09-06 00:04:22 +0000 UTC" firstStartedPulling="2025-09-06 00:05:02.128627249 +0000 UTC m=+58.266103005" lastFinishedPulling="2025-09-06 00:05:12.737232274 +0000 UTC m=+68.874708042" observedRunningTime="2025-09-06 00:05:13.893692545 +0000 UTC m=+70.031168373" watchObservedRunningTime="2025-09-06 00:05:15.636824672 +0000 UTC m=+71.774300428" Sep 6 00:05:15.696000 audit[5696]: NETFILTER_CFG table=filter:120 family=2 entries=13 op=nft_register_rule pid=5696 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:05:15.696000 audit[5696]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffc5f35680 a2=0 a3=1 items=0 ppid=3026 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:15.696000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:05:15.703000 audit[5696]: NETFILTER_CFG table=nat:121 family=2 entries=27 op=nft_register_chain pid=5696 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:05:15.703000 audit[5696]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=9348 a0=3 a1=ffffc5f35680 a2=0 a3=1 items=0 ppid=3026 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:15.703000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:05:15.770000 audit[5685]: USER_ACCT pid=5685 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:15.773365 sshd[5685]: Accepted publickey for core from 147.75.109.163 port 42648 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:05:15.775000 audit[5685]: CRED_ACQ pid=5685 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:15.776000 audit[5685]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdd65cf00 a2=3 a3=1 items=0 ppid=1 pid=5685 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:15.776000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:05:15.778992 sshd[5685]: pam_unix(sshd:session): session opened 
for user core(uid=500) by (uid=0) Sep 6 00:05:15.799886 systemd-logind[1837]: New session 11 of user core. Sep 6 00:05:15.801245 systemd[1]: Started session-11.scope. Sep 6 00:05:15.828000 audit[5685]: USER_START pid=5685 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:15.832000 audit[5702]: CRED_ACQ pid=5702 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:15.898000 audit[5704]: NETFILTER_CFG table=filter:122 family=2 entries=11 op=nft_register_rule pid=5704 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:05:15.898000 audit[5704]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffc49b7040 a2=0 a3=1 items=0 ppid=3026 pid=5704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:15.898000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:05:15.904000 audit[5704]: NETFILTER_CFG table=nat:123 family=2 entries=29 op=nft_register_chain pid=5704 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:05:15.904000 audit[5704]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10116 a0=3 a1=ffffc49b7040 a2=0 a3=1 items=0 ppid=3026 pid=5704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 
00:05:15.904000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:05:16.256494 sshd[5685]: pam_unix(sshd:session): session closed for user core Sep 6 00:05:16.257000 audit[5685]: USER_END pid=5685 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:16.257000 audit[5685]: CRED_DISP pid=5685 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:16.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.24.61:22-147.75.109.163:42648 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:16.262146 systemd[1]: sshd@10-172.31.24.61:22-147.75.109.163:42648.service: Deactivated successfully. Sep 6 00:05:16.263807 systemd[1]: session-11.scope: Deactivated successfully. Sep 6 00:05:16.266058 systemd-logind[1837]: Session 11 logged out. Waiting for processes to exit. Sep 6 00:05:16.268935 systemd-logind[1837]: Removed session 11. Sep 6 00:05:16.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.24.61:22-147.75.109.163:42656 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:16.279433 systemd[1]: Started sshd@11-172.31.24.61:22-147.75.109.163:42656.service. 
Sep 6 00:05:16.484000 audit[5712]: USER_ACCT pid=5712 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:16.485964 sshd[5712]: Accepted publickey for core from 147.75.109.163 port 42656 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:05:16.486000 audit[5712]: CRED_ACQ pid=5712 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:16.486000 audit[5712]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc90ae420 a2=3 a3=1 items=0 ppid=1 pid=5712 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:16.486000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:05:16.488656 sshd[5712]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:05:16.497975 systemd[1]: Started session-12.scope. Sep 6 00:05:16.504301 systemd-logind[1837]: New session 12 of user core. 
Sep 6 00:05:16.521000 audit[5712]: USER_START pid=5712 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:16.524000 audit[5715]: CRED_ACQ pid=5715 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:16.613030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4143837827.mount: Deactivated successfully. Sep 6 00:05:16.689058 env[1848]: time="2025-09-06T00:05:16.688971507Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:05:16.701285 env[1848]: time="2025-09-06T00:05:16.701163352Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:05:16.711297 env[1848]: time="2025-09-06T00:05:16.711225039Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:05:16.718226 env[1848]: time="2025-09-06T00:05:16.718133785Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:05:16.719787 env[1848]: time="2025-09-06T00:05:16.719702024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns 
image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 6 00:05:16.728642 env[1848]: time="2025-09-06T00:05:16.728401642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 6 00:05:16.734811 env[1848]: time="2025-09-06T00:05:16.734735785Z" level=info msg="CreateContainer within sandbox \"c63311d4d5b771442da034ffe7043a2e8151ecb16ce4e44c545dd42066480e2b\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 6 00:05:16.791255 env[1848]: time="2025-09-06T00:05:16.791086445Z" level=info msg="CreateContainer within sandbox \"c63311d4d5b771442da034ffe7043a2e8151ecb16ce4e44c545dd42066480e2b\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"44621f213b9a5bd0f42d32ed892be2cc4638d736f98ac4871526f9f2ca1f37d0\"" Sep 6 00:05:16.794610 env[1848]: time="2025-09-06T00:05:16.794540020Z" level=info msg="StartContainer for \"44621f213b9a5bd0f42d32ed892be2cc4638d736f98ac4871526f9f2ca1f37d0\"" Sep 6 00:05:16.877403 systemd[1]: run-containerd-runc-k8s.io-44621f213b9a5bd0f42d32ed892be2cc4638d736f98ac4871526f9f2ca1f37d0-runc.8bMAF9.mount: Deactivated successfully. 
Sep 6 00:05:16.894372 sshd[5712]: pam_unix(sshd:session): session closed for user core Sep 6 00:05:16.896000 audit[5712]: USER_END pid=5712 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:16.899000 audit[5712]: CRED_DISP pid=5712 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:16.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.24.61:22-147.75.109.163:42656 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:16.907237 systemd[1]: sshd@11-172.31.24.61:22-147.75.109.163:42656.service: Deactivated successfully. Sep 6 00:05:16.909729 systemd[1]: session-12.scope: Deactivated successfully. Sep 6 00:05:16.913733 systemd-logind[1837]: Session 12 logged out. Waiting for processes to exit. Sep 6 00:05:16.927605 systemd-logind[1837]: Removed session 12. 
Sep 6 00:05:16.997218 env[1848]: time="2025-09-06T00:05:16.997092603Z" level=info msg="StartContainer for \"44621f213b9a5bd0f42d32ed892be2cc4638d736f98ac4871526f9f2ca1f37d0\" returns successfully" Sep 6 00:05:17.982000 audit[5758]: NETFILTER_CFG table=filter:124 family=2 entries=9 op=nft_register_rule pid=5758 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:05:17.982000 audit[5758]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=fffff8dd71a0 a2=0 a3=1 items=0 ppid=3026 pid=5758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:17.982000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:05:17.987000 audit[5758]: NETFILTER_CFG table=nat:125 family=2 entries=31 op=nft_register_chain pid=5758 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:05:17.987000 audit[5758]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10884 a0=3 a1=fffff8dd71a0 a2=0 a3=1 items=0 ppid=3026 pid=5758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:17.987000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:05:18.528800 env[1848]: time="2025-09-06T00:05:18.528719700Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:05:18.535016 env[1848]: time="2025-09-06T00:05:18.534252966Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:05:18.537838 env[1848]: time="2025-09-06T00:05:18.537760775Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:05:18.541096 env[1848]: time="2025-09-06T00:05:18.541013165Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:05:18.543038 env[1848]: time="2025-09-06T00:05:18.542976366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 6 00:05:18.549496 env[1848]: time="2025-09-06T00:05:18.547689317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 6 00:05:18.552452 env[1848]: time="2025-09-06T00:05:18.552331587Z" level=info msg="CreateContainer within sandbox \"6cee43d7be3a590307a93668c8808b67c59fa611300d2d84e6230853b9e04207\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 6 00:05:18.590310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2061149829.mount: Deactivated successfully. 
Sep 6 00:05:18.598965 env[1848]: time="2025-09-06T00:05:18.598845674Z" level=info msg="CreateContainer within sandbox \"6cee43d7be3a590307a93668c8808b67c59fa611300d2d84e6230853b9e04207\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9f78cbfdd02d6d207f46f6fd05d2dbc6547dbf0a3951a1039aca5ea0b430197d\"" Sep 6 00:05:18.601545 env[1848]: time="2025-09-06T00:05:18.599954888Z" level=info msg="StartContainer for \"9f78cbfdd02d6d207f46f6fd05d2dbc6547dbf0a3951a1039aca5ea0b430197d\"" Sep 6 00:05:18.667141 systemd[1]: run-containerd-runc-k8s.io-9f78cbfdd02d6d207f46f6fd05d2dbc6547dbf0a3951a1039aca5ea0b430197d-runc.SNWr1i.mount: Deactivated successfully. Sep 6 00:05:18.760249 env[1848]: time="2025-09-06T00:05:18.755995595Z" level=info msg="StartContainer for \"9f78cbfdd02d6d207f46f6fd05d2dbc6547dbf0a3951a1039aca5ea0b430197d\" returns successfully" Sep 6 00:05:21.919125 systemd[1]: Started sshd@12-172.31.24.61:22-147.75.109.163:55424.service. Sep 6 00:05:21.929724 kernel: kauditd_printk_skb: 47 callbacks suppressed Sep 6 00:05:21.929897 kernel: audit: type=1130 audit(1757117121.919:481): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.24.61:22-147.75.109.163:55424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:21.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.24.61:22-147.75.109.163:55424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:05:22.129000 audit[5799]: USER_ACCT pid=5799 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:22.131547 sshd[5799]: Accepted publickey for core from 147.75.109.163 port 55424 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:05:22.137503 sshd[5799]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:05:22.129000 audit[5799]: CRED_ACQ pid=5799 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:22.155687 kernel: audit: type=1101 audit(1757117122.129:482): pid=5799 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:22.155951 kernel: audit: type=1103 audit(1757117122.129:483): pid=5799 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:22.162281 kernel: audit: type=1006 audit(1757117122.129:484): pid=5799 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Sep 6 00:05:22.162912 kernel: audit: type=1300 audit(1757117122.129:484): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeb7afa90 a2=3 a3=1 items=0 ppid=1 pid=5799 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" 
exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:22.129000 audit[5799]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeb7afa90 a2=3 a3=1 items=0 ppid=1 pid=5799 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:22.129000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:05:22.177943 kernel: audit: type=1327 audit(1757117122.129:484): proctitle=737368643A20636F7265205B707269765D Sep 6 00:05:22.182678 systemd-logind[1837]: New session 13 of user core. Sep 6 00:05:22.187756 systemd[1]: Started session-13.scope. Sep 6 00:05:22.212000 audit[5799]: USER_START pid=5799 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:22.226000 audit[5802]: CRED_ACQ pid=5802 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:22.240221 kernel: audit: type=1105 audit(1757117122.212:485): pid=5799 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:22.240361 kernel: audit: type=1103 audit(1757117122.226:486): pid=5802 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 
00:05:22.543641 sshd[5799]: pam_unix(sshd:session): session closed for user core Sep 6 00:05:22.547000 audit[5799]: USER_END pid=5799 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:22.560710 systemd[1]: sshd@12-172.31.24.61:22-147.75.109.163:55424.service: Deactivated successfully. Sep 6 00:05:22.563905 systemd[1]: session-13.scope: Deactivated successfully. Sep 6 00:05:22.547000 audit[5799]: CRED_DISP pid=5799 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:22.576635 kernel: audit: type=1106 audit(1757117122.547:487): pid=5799 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:22.576825 kernel: audit: type=1104 audit(1757117122.547:488): pid=5799 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:22.577917 systemd-logind[1837]: Session 13 logged out. Waiting for processes to exit. Sep 6 00:05:22.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.24.61:22-147.75.109.163:55424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:22.580155 systemd-logind[1837]: Removed session 13. 
Sep 6 00:05:22.590279 env[1848]: time="2025-09-06T00:05:22.590154706Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:05:22.596403 env[1848]: time="2025-09-06T00:05:22.596315256Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:05:22.600526 env[1848]: time="2025-09-06T00:05:22.600466914Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:05:22.604173 env[1848]: time="2025-09-06T00:05:22.604111486Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:05:22.605459 env[1848]: time="2025-09-06T00:05:22.605389815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 6 00:05:22.613401 env[1848]: time="2025-09-06T00:05:22.609655593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 6 00:05:22.661269 env[1848]: time="2025-09-06T00:05:22.660834529Z" level=info msg="CreateContainer within sandbox \"eb7a032dbd814d4baf823b59cd5d62efd8ad669504c619c7ca7cf8f91ed031e5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 6 00:05:22.698079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1063378150.mount: Deactivated successfully. 
Sep 6 00:05:22.701042 env[1848]: time="2025-09-06T00:05:22.700963757Z" level=info msg="CreateContainer within sandbox \"eb7a032dbd814d4baf823b59cd5d62efd8ad669504c619c7ca7cf8f91ed031e5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3ef2335b73315b9a545c9095fbad060acfac8f324bb91b5132784cdf07c8300f\"" Sep 6 00:05:22.702427 env[1848]: time="2025-09-06T00:05:22.702365723Z" level=info msg="StartContainer for \"3ef2335b73315b9a545c9095fbad060acfac8f324bb91b5132784cdf07c8300f\"" Sep 6 00:05:22.863786 env[1848]: time="2025-09-06T00:05:22.863651394Z" level=info msg="StartContainer for \"3ef2335b73315b9a545c9095fbad060acfac8f324bb91b5132784cdf07c8300f\" returns successfully" Sep 6 00:05:22.969839 env[1848]: time="2025-09-06T00:05:22.969781697Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:05:22.976172 env[1848]: time="2025-09-06T00:05:22.976107021Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:05:22.981622 env[1848]: time="2025-09-06T00:05:22.981567008Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:05:22.986797 env[1848]: time="2025-09-06T00:05:22.986737986Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:05:22.994172 env[1848]: time="2025-09-06T00:05:22.988664414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference 
\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 6 00:05:23.002100 env[1848]: time="2025-09-06T00:05:23.002029858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 6 00:05:23.004041 env[1848]: time="2025-09-06T00:05:23.003978610Z" level=info msg="CreateContainer within sandbox \"48dc23c262ce56081ef0eb0d8b65c1734f9b65c6f0f104c00de819bcab7fac62\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 6 00:05:23.033676 kubelet[2926]: I0906 00:05:23.032899 2926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-b986b57c8-g24l8" podStartSLOduration=8.929231774 podStartE2EDuration="26.032873383s" podCreationTimestamp="2025-09-06 00:04:57 +0000 UTC" firstStartedPulling="2025-09-06 00:04:59.623383396 +0000 UTC m=+55.760859152" lastFinishedPulling="2025-09-06 00:05:16.727025017 +0000 UTC m=+72.864500761" observedRunningTime="2025-09-06 00:05:17.950656062 +0000 UTC m=+74.088131938" watchObservedRunningTime="2025-09-06 00:05:23.032873383 +0000 UTC m=+79.170349151" Sep 6 00:05:23.045664 env[1848]: time="2025-09-06T00:05:23.045589026Z" level=info msg="CreateContainer within sandbox \"48dc23c262ce56081ef0eb0d8b65c1734f9b65c6f0f104c00de819bcab7fac62\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d0244272c5c0d3cf3f70fbfd423ef61c19993f0dc3e69ba749575dadd0e49781\"" Sep 6 00:05:23.047224 env[1848]: time="2025-09-06T00:05:23.047147980Z" level=info msg="StartContainer for \"d0244272c5c0d3cf3f70fbfd423ef61c19993f0dc3e69ba749575dadd0e49781\"" Sep 6 00:05:23.261933 env[1848]: time="2025-09-06T00:05:23.261845511Z" level=info msg="StartContainer for \"d0244272c5c0d3cf3f70fbfd423ef61c19993f0dc3e69ba749575dadd0e49781\" returns successfully" Sep 6 00:05:24.045832 kubelet[2926]: I0906 00:05:24.044786 2926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/calico-kube-controllers-764999b789-c2t8l" podStartSLOduration=35.817566436 podStartE2EDuration="53.044762964s" podCreationTimestamp="2025-09-06 00:04:31 +0000 UTC" firstStartedPulling="2025-09-06 00:05:05.380932035 +0000 UTC m=+61.518407779" lastFinishedPulling="2025-09-06 00:05:22.608128551 +0000 UTC m=+78.745604307" observedRunningTime="2025-09-06 00:05:23.03662492 +0000 UTC m=+79.174100688" watchObservedRunningTime="2025-09-06 00:05:24.044762964 +0000 UTC m=+80.182238720" Sep 6 00:05:24.089000 audit[5909]: NETFILTER_CFG table=filter:126 family=2 entries=8 op=nft_register_rule pid=5909 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:05:24.089000 audit[5909]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffe4988060 a2=0 a3=1 items=0 ppid=3026 pid=5909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:24.089000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:05:24.097000 audit[5909]: NETFILTER_CFG table=nat:127 family=2 entries=34 op=nft_register_rule pid=5909 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:05:24.097000 audit[5909]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10884 a0=3 a1=ffffe4988060 a2=0 a3=1 items=0 ppid=3026 pid=5909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:24.097000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:05:24.135961 systemd[1]: run-containerd-runc-k8s.io-3ef2335b73315b9a545c9095fbad060acfac8f324bb91b5132784cdf07c8300f-runc.ANzFNC.mount: 
Deactivated successfully. Sep 6 00:05:24.226149 kubelet[2926]: I0906 00:05:24.226042 2926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-797f9c6c85-hw6d9" podStartSLOduration=44.751605645 podStartE2EDuration="1m2.226015926s" podCreationTimestamp="2025-09-06 00:04:22 +0000 UTC" firstStartedPulling="2025-09-06 00:05:05.525598574 +0000 UTC m=+61.663074342" lastFinishedPulling="2025-09-06 00:05:23.000008771 +0000 UTC m=+79.137484623" observedRunningTime="2025-09-06 00:05:24.046006979 +0000 UTC m=+80.183482759" watchObservedRunningTime="2025-09-06 00:05:24.226015926 +0000 UTC m=+80.363491682" Sep 6 00:05:25.007605 kubelet[2926]: I0906 00:05:25.007546 2926 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 00:05:25.526724 env[1848]: time="2025-09-06T00:05:25.526645467Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:05:25.533162 env[1848]: time="2025-09-06T00:05:25.533081569Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:05:25.540750 env[1848]: time="2025-09-06T00:05:25.540654634Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:05:25.550103 env[1848]: time="2025-09-06T00:05:25.550023748Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:05:25.554847 env[1848]: time="2025-09-06T00:05:25.552170139Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 6 00:05:25.563782 env[1848]: time="2025-09-06T00:05:25.563700176Z" level=info msg="CreateContainer within sandbox \"6cee43d7be3a590307a93668c8808b67c59fa611300d2d84e6230853b9e04207\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 6 00:05:25.608440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2207048093.mount: Deactivated successfully. Sep 6 00:05:25.617623 env[1848]: time="2025-09-06T00:05:25.617537666Z" level=info msg="CreateContainer within sandbox \"6cee43d7be3a590307a93668c8808b67c59fa611300d2d84e6230853b9e04207\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"515438a6a02684df9dc4bff3756c3cc13c10c9f1b9acce8ea741ffe9d2cb6450\"" Sep 6 00:05:25.618803 env[1848]: time="2025-09-06T00:05:25.618720495Z" level=info msg="StartContainer for \"515438a6a02684df9dc4bff3756c3cc13c10c9f1b9acce8ea741ffe9d2cb6450\"" Sep 6 00:05:25.789479 env[1848]: time="2025-09-06T00:05:25.789289211Z" level=info msg="StartContainer for \"515438a6a02684df9dc4bff3756c3cc13c10c9f1b9acce8ea741ffe9d2cb6450\" returns successfully" Sep 6 00:05:26.450159 kubelet[2926]: I0906 00:05:26.450087 2926 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 6 00:05:26.450159 kubelet[2926]: I0906 00:05:26.450167 2926 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 6 00:05:27.592486 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 6 00:05:27.592689 kernel: audit: type=1130 audit(1757117127.579:492): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@13-172.31.24.61:22-147.75.109.163:55436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:27.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.24.61:22-147.75.109.163:55436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:27.580650 systemd[1]: Started sshd@13-172.31.24.61:22-147.75.109.163:55436.service. Sep 6 00:05:27.812000 audit[5963]: USER_ACCT pid=5963 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:27.824779 sshd[5963]: Accepted publickey for core from 147.75.109.163 port 55436 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:05:27.826317 kernel: audit: type=1101 audit(1757117127.812:493): pid=5963 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:27.826000 audit[5963]: CRED_ACQ pid=5963 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:27.839898 sshd[5963]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:05:27.849466 kernel: audit: type=1103 audit(1757117127.826:494): pid=5963 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh 
res=success' Sep 6 00:05:27.849653 kernel: audit: type=1006 audit(1757117127.826:495): pid=5963 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Sep 6 00:05:27.826000 audit[5963]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe12b9cd0 a2=3 a3=1 items=0 ppid=1 pid=5963 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:27.864559 kernel: audit: type=1300 audit(1757117127.826:495): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe12b9cd0 a2=3 a3=1 items=0 ppid=1 pid=5963 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:27.826000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:05:27.869667 kernel: audit: type=1327 audit(1757117127.826:495): proctitle=737368643A20636F7265205B707269765D Sep 6 00:05:27.873027 systemd[1]: Started session-14.scope. Sep 6 00:05:27.873518 systemd-logind[1837]: New session 14 of user core. 
Sep 6 00:05:27.908000 audit[5963]: USER_START pid=5963 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:27.925242 kernel: audit: type=1105 audit(1757117127.908:496): pid=5963 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:27.924000 audit[5966]: CRED_ACQ pid=5966 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:27.937268 kernel: audit: type=1103 audit(1757117127.924:497): pid=5966 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:28.289337 sshd[5963]: pam_unix(sshd:session): session closed for user core Sep 6 00:05:28.290000 audit[5963]: USER_END pid=5963 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:28.302015 systemd[1]: sshd@13-172.31.24.61:22-147.75.109.163:55436.service: Deactivated successfully. Sep 6 00:05:28.304819 systemd-logind[1837]: Session 14 logged out. Waiting for processes to exit. 
Sep 6 00:05:28.296000 audit[5963]: CRED_DISP pid=5963 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:28.307587 systemd[1]: session-14.scope: Deactivated successfully. Sep 6 00:05:28.314868 kernel: audit: type=1106 audit(1757117128.290:498): pid=5963 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:28.315064 kernel: audit: type=1104 audit(1757117128.296:499): pid=5963 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:28.316245 systemd-logind[1837]: Removed session 14. Sep 6 00:05:28.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.24.61:22-147.75.109.163:55436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:33.313909 systemd[1]: Started sshd@14-172.31.24.61:22-147.75.109.163:46526.service. Sep 6 00:05:33.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.24.61:22-147.75.109.163:46526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:05:33.318030 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 6 00:05:33.318200 kernel: audit: type=1130 audit(1757117133.314:501): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.24.61:22-147.75.109.163:46526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:33.503619 sshd[5976]: Accepted publickey for core from 147.75.109.163 port 46526 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:05:33.502000 audit[5976]: USER_ACCT pid=5976 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:33.518755 sshd[5976]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:05:33.515000 audit[5976]: CRED_ACQ pid=5976 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:33.537532 kernel: audit: type=1101 audit(1757117133.502:502): pid=5976 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:33.537687 kernel: audit: type=1103 audit(1757117133.515:503): pid=5976 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:33.541277 systemd-logind[1837]: New session 15 of user core. 
Sep 6 00:05:33.543102 systemd[1]: Started session-15.scope. Sep 6 00:05:33.571281 kernel: audit: type=1006 audit(1757117133.516:504): pid=5976 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Sep 6 00:05:33.516000 audit[5976]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff9616100 a2=3 a3=1 items=0 ppid=1 pid=5976 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:33.588144 kernel: audit: type=1300 audit(1757117133.516:504): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff9616100 a2=3 a3=1 items=0 ppid=1 pid=5976 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:33.516000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:05:33.598728 kernel: audit: type=1327 audit(1757117133.516:504): proctitle=737368643A20636F7265205B707269765D Sep 6 00:05:33.555000 audit[5976]: USER_START pid=5976 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:33.614727 kernel: audit: type=1105 audit(1757117133.555:505): pid=5976 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:33.558000 audit[5979]: CRED_ACQ pid=5979 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:33.632536 kernel: audit: type=1103 audit(1757117133.558:506): pid=5979 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:33.938914 sshd[5976]: pam_unix(sshd:session): session closed for user core Sep 6 00:05:33.940000 audit[5976]: USER_END pid=5976 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:33.954494 systemd[1]: sshd@14-172.31.24.61:22-147.75.109.163:46526.service: Deactivated successfully. Sep 6 00:05:33.956827 systemd[1]: session-15.scope: Deactivated successfully. Sep 6 00:05:33.957620 systemd-logind[1837]: Session 15 logged out. Waiting for processes to exit. 
Sep 6 00:05:33.940000 audit[5976]: CRED_DISP pid=5976 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:33.968236 kernel: audit: type=1106 audit(1757117133.940:507): pid=5976 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:33.968438 kernel: audit: type=1104 audit(1757117133.940:508): pid=5976 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:33.970010 systemd-logind[1837]: Removed session 15. Sep 6 00:05:33.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.24.61:22-147.75.109.163:46526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:38.216644 systemd[1]: run-containerd-runc-k8s.io-5349da681c3d6e91eb0f99d9ea65366fd853387fa9a924ad76cbb13b820900a4-runc.1Qk5wj.mount: Deactivated successfully. Sep 6 00:05:38.966120 systemd[1]: Started sshd@15-172.31.24.61:22-147.75.109.163:46542.service. Sep 6 00:05:38.978432 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 6 00:05:38.978666 kernel: audit: type=1130 audit(1757117138.966:510): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.24.61:22-147.75.109.163:46542 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:05:38.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.24.61:22-147.75.109.163:46542 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:39.189000 audit[6010]: USER_ACCT pid=6010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:39.193452 sshd[6010]: Accepted publickey for core from 147.75.109.163 port 46542 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:05:39.203278 kernel: audit: type=1101 audit(1757117139.189:511): pid=6010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:39.201000 audit[6010]: CRED_ACQ pid=6010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:39.214340 sshd[6010]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:05:39.221020 kernel: audit: type=1103 audit(1757117139.201:512): pid=6010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:39.221168 kernel: audit: type=1006 audit(1757117139.201:513): pid=6010 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Sep 6 
00:05:39.201000 audit[6010]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc57ed7b0 a2=3 a3=1 items=0 ppid=1 pid=6010 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:39.232944 kernel: audit: type=1300 audit(1757117139.201:513): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc57ed7b0 a2=3 a3=1 items=0 ppid=1 pid=6010 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:39.201000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:05:39.237057 kernel: audit: type=1327 audit(1757117139.201:513): proctitle=737368643A20636F7265205B707269765D Sep 6 00:05:39.243572 systemd-logind[1837]: New session 16 of user core. Sep 6 00:05:39.245991 systemd[1]: Started session-16.scope. Sep 6 00:05:39.255000 audit[6010]: USER_START pid=6010 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:39.268000 audit[6013]: CRED_ACQ pid=6013 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:39.279493 kernel: audit: type=1105 audit(1757117139.255:514): pid=6010 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:39.279604 kernel: audit: type=1103 
audit(1757117139.268:515): pid=6013 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:39.534311 sshd[6010]: pam_unix(sshd:session): session closed for user core Sep 6 00:05:39.536000 audit[6010]: USER_END pid=6010 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:39.547000 audit[6010]: CRED_DISP pid=6010 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:39.559847 kernel: audit: type=1106 audit(1757117139.536:516): pid=6010 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:39.560019 kernel: audit: type=1104 audit(1757117139.547:517): pid=6010 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:39.560891 systemd[1]: sshd@15-172.31.24.61:22-147.75.109.163:46542.service: Deactivated successfully. Sep 6 00:05:39.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.24.61:22-147.75.109.163:46542 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:05:39.564288 systemd-logind[1837]: Session 16 logged out. Waiting for processes to exit. Sep 6 00:05:39.571360 systemd[1]: Started sshd@16-172.31.24.61:22-147.75.109.163:46546.service. Sep 6 00:05:39.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.24.61:22-147.75.109.163:46546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:39.573130 systemd[1]: session-16.scope: Deactivated successfully. Sep 6 00:05:39.575651 systemd-logind[1837]: Removed session 16. Sep 6 00:05:39.754134 sshd[6023]: Accepted publickey for core from 147.75.109.163 port 46546 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:05:39.752000 audit[6023]: USER_ACCT pid=6023 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:39.754000 audit[6023]: CRED_ACQ pid=6023 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:39.754000 audit[6023]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe7fec650 a2=3 a3=1 items=0 ppid=1 pid=6023 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:39.754000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:05:39.756887 sshd[6023]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:05:39.766458 systemd-logind[1837]: New session 17 of user core. Sep 6 00:05:39.766474 systemd[1]: Started session-17.scope. 
Sep 6 00:05:39.779000 audit[6023]: USER_START pid=6023 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:39.783000 audit[6026]: CRED_ACQ pid=6026 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:40.353061 sshd[6023]: pam_unix(sshd:session): session closed for user core Sep 6 00:05:40.354000 audit[6023]: USER_END pid=6023 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:40.354000 audit[6023]: CRED_DISP pid=6023 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:40.358517 systemd-logind[1837]: Session 17 logged out. Waiting for processes to exit. Sep 6 00:05:40.360845 systemd[1]: sshd@16-172.31.24.61:22-147.75.109.163:46546.service: Deactivated successfully. Sep 6 00:05:40.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.24.61:22-147.75.109.163:46546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:40.362840 systemd[1]: session-17.scope: Deactivated successfully. Sep 6 00:05:40.366491 systemd-logind[1837]: Removed session 17. 
Sep 6 00:05:40.381077 systemd[1]: Started sshd@17-172.31.24.61:22-147.75.109.163:44852.service. Sep 6 00:05:40.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.24.61:22-147.75.109.163:44852 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:40.566000 audit[6035]: USER_ACCT pid=6035 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:40.567731 sshd[6035]: Accepted publickey for core from 147.75.109.163 port 44852 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:05:40.568000 audit[6035]: CRED_ACQ pid=6035 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:40.568000 audit[6035]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff76d50c0 a2=3 a3=1 items=0 ppid=1 pid=6035 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:40.568000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:05:40.571130 sshd[6035]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:05:40.580288 systemd-logind[1837]: New session 18 of user core. Sep 6 00:05:40.581244 systemd[1]: Started session-18.scope. 
Sep 6 00:05:40.594000 audit[6035]: USER_START pid=6035 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:40.597000 audit[6038]: CRED_ACQ pid=6038 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:42.677130 kubelet[2926]: I0906 00:05:42.677088 2926 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 00:05:42.716836 kubelet[2926]: I0906 00:05:42.716734 2926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zpqph" podStartSLOduration=51.214148222 podStartE2EDuration="1m11.716713526s" podCreationTimestamp="2025-09-06 00:04:31 +0000 UTC" firstStartedPulling="2025-09-06 00:05:05.054817007 +0000 UTC m=+61.192292763" lastFinishedPulling="2025-09-06 00:05:25.557382323 +0000 UTC m=+81.694858067" observedRunningTime="2025-09-06 00:05:26.044549724 +0000 UTC m=+82.182025516" watchObservedRunningTime="2025-09-06 00:05:42.716713526 +0000 UTC m=+98.854189282" Sep 6 00:05:42.796000 audit[6053]: NETFILTER_CFG table=filter:128 family=2 entries=8 op=nft_register_rule pid=6053 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:05:42.796000 audit[6053]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=fffff3571610 a2=0 a3=1 items=0 ppid=3026 pid=6053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:42.796000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:05:42.804000 audit[6053]: NETFILTER_CFG table=nat:129 family=2 entries=38 op=nft_register_chain pid=6053 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:05:42.804000 audit[6053]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=12772 a0=3 a1=fffff3571610 a2=0 a3=1 items=0 ppid=3026 pid=6053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:42.804000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:05:44.118597 kernel: kauditd_printk_skb: 26 callbacks suppressed Sep 6 00:05:44.118840 kernel: audit: type=1325 audit(1757117144.108:536): table=filter:130 family=2 entries=20 op=nft_register_rule pid=6056 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:05:44.108000 audit[6056]: NETFILTER_CFG table=filter:130 family=2 entries=20 op=nft_register_rule pid=6056 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:05:44.108000 audit[6056]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11944 a0=3 a1=ffffe1da09c0 a2=0 a3=1 items=0 ppid=3026 pid=6056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:44.134963 sshd[6035]: pam_unix(sshd:session): session closed for user core Sep 6 00:05:44.141462 systemd[1]: sshd@17-172.31.24.61:22-147.75.109.163:44852.service: Deactivated successfully. Sep 6 00:05:44.144311 systemd[1]: session-18.scope: Deactivated successfully. Sep 6 00:05:44.144930 systemd-logind[1837]: Session 18 logged out. Waiting for processes to exit. 
Sep 6 00:05:44.146948 systemd-logind[1837]: Removed session 18. Sep 6 00:05:44.108000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:05:44.161655 kernel: audit: type=1300 audit(1757117144.108:536): arch=c00000b7 syscall=211 success=yes exit=11944 a0=3 a1=ffffe1da09c0 a2=0 a3=1 items=0 ppid=3026 pid=6056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:44.161804 kernel: audit: type=1327 audit(1757117144.108:536): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:05:44.161807 systemd[1]: Started sshd@18-172.31.24.61:22-147.75.109.163:44860.service. Sep 6 00:05:44.194150 kernel: audit: type=1106 audit(1757117144.136:537): pid=6035 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:44.136000 audit[6035]: USER_END pid=6035 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:44.226572 kernel: audit: type=1104 audit(1757117144.136:538): pid=6035 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:44.136000 audit[6035]: CRED_DISP pid=6035 uid=0 auid=500 ses=18 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:44.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.24.61:22-147.75.109.163:44852 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:44.244273 kernel: audit: type=1131 audit(1757117144.140:539): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.24.61:22-147.75.109.163:44852 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:44.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.24.61:22-147.75.109.163:44860 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:44.265698 kernel: audit: type=1130 audit(1757117144.161:540): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.24.61:22-147.75.109.163:44860 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:05:44.226000 audit[6056]: NETFILTER_CFG table=nat:131 family=2 entries=26 op=nft_register_rule pid=6056 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:05:44.280162 kernel: audit: type=1325 audit(1757117144.226:541): table=nat:131 family=2 entries=26 op=nft_register_rule pid=6056 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:05:44.226000 audit[6056]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8076 a0=3 a1=ffffe1da09c0 a2=0 a3=1 items=0 ppid=3026 pid=6056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:44.298275 kernel: audit: type=1300 audit(1757117144.226:541): arch=c00000b7 syscall=211 success=yes exit=8076 a0=3 a1=ffffe1da09c0 a2=0 a3=1 items=0 ppid=3026 pid=6056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:44.226000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:05:44.317140 kernel: audit: type=1327 audit(1757117144.226:541): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:05:44.408555 sshd[6059]: Accepted publickey for core from 147.75.109.163 port 44860 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:05:44.407000 audit[6059]: USER_ACCT pid=6059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:44.409000 audit[6059]: CRED_ACQ pid=6059 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:44.409000 audit[6059]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffea552f70 a2=3 a3=1 items=0 ppid=1 pid=6059 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:44.409000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:05:44.412378 sshd[6059]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:05:44.426776 systemd[1]: Started session-19.scope. Sep 6 00:05:44.429495 systemd-logind[1837]: New session 19 of user core. Sep 6 00:05:44.463000 audit[6059]: USER_START pid=6059 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:44.472000 audit[6063]: CRED_ACQ pid=6063 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:44.539000 audit[6064]: NETFILTER_CFG table=filter:132 family=2 entries=32 op=nft_register_rule pid=6064 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:05:44.539000 audit[6064]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11944 a0=3 a1=ffffe4c25ec0 a2=0 a3=1 items=0 ppid=3026 pid=6064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 
00:05:44.539000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:05:44.548000 audit[6064]: NETFILTER_CFG table=nat:133 family=2 entries=26 op=nft_register_rule pid=6064 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:05:44.548000 audit[6064]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8076 a0=3 a1=ffffe4c25ec0 a2=0 a3=1 items=0 ppid=3026 pid=6064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:44.548000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:05:45.328556 sshd[6059]: pam_unix(sshd:session): session closed for user core Sep 6 00:05:45.351018 systemd[1]: run-containerd-runc-k8s.io-bd8c590e3adb75480b1d8fdfd093d78e6d81f7d307ce449a129f1213933b39b6-runc.W6lqsj.mount: Deactivated successfully. Sep 6 00:05:45.366588 systemd[1]: Started sshd@19-172.31.24.61:22-147.75.109.163:44868.service. Sep 6 00:05:45.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.24.61:22-147.75.109.163:44868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:05:45.372000 audit[6059]: USER_END pid=6059 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:45.373000 audit[6059]: CRED_DISP pid=6059 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:45.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.24.61:22-147.75.109.163:44860 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:45.376720 systemd[1]: sshd@18-172.31.24.61:22-147.75.109.163:44860.service: Deactivated successfully. Sep 6 00:05:45.378698 systemd[1]: session-19.scope: Deactivated successfully. Sep 6 00:05:45.403507 systemd-logind[1837]: Session 19 logged out. Waiting for processes to exit. Sep 6 00:05:45.445540 systemd[1]: run-containerd-runc-k8s.io-3ef2335b73315b9a545c9095fbad060acfac8f324bb91b5132784cdf07c8300f-runc.uE6c0h.mount: Deactivated successfully. Sep 6 00:05:45.463303 systemd-logind[1837]: Removed session 19. 
Sep 6 00:05:45.588000 audit[6085]: USER_ACCT pid=6085 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:45.588791 sshd[6085]: Accepted publickey for core from 147.75.109.163 port 44868 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:05:45.590000 audit[6085]: CRED_ACQ pid=6085 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:45.591000 audit[6085]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff9c6cb60 a2=3 a3=1 items=0 ppid=1 pid=6085 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:45.591000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:05:45.592231 sshd[6085]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:05:45.605277 systemd[1]: Started session-20.scope. Sep 6 00:05:45.607591 systemd-logind[1837]: New session 20 of user core. 
Sep 6 00:05:45.627000 audit[6085]: USER_START pid=6085 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:45.631000 audit[6111]: CRED_ACQ pid=6111 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:46.058372 sshd[6085]: pam_unix(sshd:session): session closed for user core Sep 6 00:05:46.060000 audit[6085]: USER_END pid=6085 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:46.060000 audit[6085]: CRED_DISP pid=6085 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:46.064502 systemd-logind[1837]: Session 20 logged out. Waiting for processes to exit. Sep 6 00:05:46.066134 systemd[1]: sshd@19-172.31.24.61:22-147.75.109.163:44868.service: Deactivated successfully. Sep 6 00:05:46.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.24.61:22-147.75.109.163:44868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:46.067710 systemd[1]: session-20.scope: Deactivated successfully. Sep 6 00:05:46.069872 systemd-logind[1837]: Removed session 20. 
Sep 6 00:05:48.690349 systemd[1]: run-containerd-runc-k8s.io-bd8c590e3adb75480b1d8fdfd093d78e6d81f7d307ce449a129f1213933b39b6-runc.egx7Ye.mount: Deactivated successfully. Sep 6 00:05:50.971933 amazon-ssm-agent[1820]: 2025-09-06 00:05:50 INFO [HealthCheck] HealthCheck reporting agent health. Sep 6 00:05:51.089561 systemd[1]: Started sshd@20-172.31.24.61:22-147.75.109.163:48604.service. Sep 6 00:05:51.101851 kernel: kauditd_printk_skb: 27 callbacks suppressed Sep 6 00:05:51.102016 kernel: audit: type=1130 audit(1757117151.089:561): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.24.61:22-147.75.109.163:48604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:51.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.24.61:22-147.75.109.163:48604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:05:51.296627 sshd[6144]: Accepted publickey for core from 147.75.109.163 port 48604 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:05:51.295000 audit[6144]: USER_ACCT pid=6144 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:51.309114 sshd[6144]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:05:51.307000 audit[6144]: CRED_ACQ pid=6144 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:51.327404 kernel: audit: type=1101 audit(1757117151.295:562): pid=6144 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:51.327570 kernel: audit: type=1103 audit(1757117151.307:563): pid=6144 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:51.319455 systemd[1]: Started session-21.scope. Sep 6 00:05:51.321698 systemd-logind[1837]: New session 21 of user core. 
Sep 6 00:05:51.343395 kernel: audit: type=1006 audit(1757117151.307:564): pid=6144 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Sep 6 00:05:51.307000 audit[6144]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe63dcec0 a2=3 a3=1 items=0 ppid=1 pid=6144 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:51.355193 kernel: audit: type=1300 audit(1757117151.307:564): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe63dcec0 a2=3 a3=1 items=0 ppid=1 pid=6144 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:51.307000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:05:51.361176 kernel: audit: type=1327 audit(1757117151.307:564): proctitle=737368643A20636F7265205B707269765D Sep 6 00:05:51.360000 audit[6144]: USER_START pid=6144 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:51.375000 audit[6147]: CRED_ACQ pid=6147 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:51.377399 kernel: audit: type=1105 audit(1757117151.360:565): pid=6144 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 
terminal=ssh res=success' Sep 6 00:05:51.387363 kernel: audit: type=1103 audit(1757117151.375:566): pid=6147 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:51.663364 sshd[6144]: pam_unix(sshd:session): session closed for user core Sep 6 00:05:51.666000 audit[6144]: USER_END pid=6144 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:51.679000 audit[6144]: CRED_DISP pid=6144 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:51.683297 kernel: audit: type=1106 audit(1757117151.666:567): pid=6144 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:51.684109 systemd[1]: sshd@20-172.31.24.61:22-147.75.109.163:48604.service: Deactivated successfully. Sep 6 00:05:51.696095 systemd-logind[1837]: Session 21 logged out. Waiting for processes to exit. Sep 6 00:05:51.696263 systemd[1]: session-21.scope: Deactivated successfully. Sep 6 00:05:51.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.24.61:22-147.75.109.163:48604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:05:51.698483 kernel: audit: type=1104 audit(1757117151.679:568): pid=6144 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:51.699321 systemd-logind[1837]: Removed session 21. Sep 6 00:05:53.002000 audit[6157]: NETFILTER_CFG table=filter:134 family=2 entries=20 op=nft_register_rule pid=6157 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:05:53.002000 audit[6157]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=fffff020d750 a2=0 a3=1 items=0 ppid=3026 pid=6157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:53.002000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:05:53.018000 audit[6157]: NETFILTER_CFG table=nat:135 family=2 entries=110 op=nft_register_chain pid=6157 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 00:05:53.018000 audit[6157]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=50988 a0=3 a1=fffff020d750 a2=0 a3=1 items=0 ppid=3026 pid=6157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:53.018000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 00:05:56.688429 systemd[1]: Started sshd@21-172.31.24.61:22-147.75.109.163:48614.service. 
Sep 6 00:05:56.700688 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 6 00:05:56.700856 kernel: audit: type=1130 audit(1757117156.688:572): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.24.61:22-147.75.109.163:48614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:56.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.24.61:22-147.75.109.163:48614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:05:56.872000 audit[6159]: USER_ACCT pid=6159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:56.876899 sshd[6159]: Accepted publickey for core from 147.75.109.163 port 48614 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:05:56.886265 kernel: audit: type=1101 audit(1757117156.872:573): pid=6159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:56.885000 audit[6159]: CRED_ACQ pid=6159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:56.887493 sshd[6159]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:05:56.903855 kernel: audit: type=1103 audit(1757117156.885:574): pid=6159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:56.903974 kernel: audit: type=1006 audit(1757117156.885:575): pid=6159 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Sep 6 00:05:56.885000 audit[6159]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe9c8c740 a2=3 a3=1 items=0 ppid=1 pid=6159 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:56.915632 kernel: audit: type=1300 audit(1757117156.885:575): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe9c8c740 a2=3 a3=1 items=0 ppid=1 pid=6159 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:05:56.885000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:05:56.919860 kernel: audit: type=1327 audit(1757117156.885:575): proctitle=737368643A20636F7265205B707269765D Sep 6 00:05:56.921854 systemd-logind[1837]: New session 22 of user core. Sep 6 00:05:56.923930 systemd[1]: Started session-22.scope. 
Sep 6 00:05:56.948000 audit[6159]: USER_START pid=6159 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:56.966374 kernel: audit: type=1105 audit(1757117156.948:576): pid=6159 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:56.963000 audit[6162]: CRED_ACQ pid=6162 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:56.976283 kernel: audit: type=1103 audit(1757117156.963:577): pid=6162 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:57.280158 sshd[6159]: pam_unix(sshd:session): session closed for user core Sep 6 00:05:57.282000 audit[6159]: USER_END pid=6159 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:57.288217 systemd[1]: sshd@21-172.31.24.61:22-147.75.109.163:48614.service: Deactivated successfully. Sep 6 00:05:57.290322 systemd[1]: session-22.scope: Deactivated successfully. Sep 6 00:05:57.296813 systemd-logind[1837]: Session 22 logged out. 
Waiting for processes to exit. Sep 6 00:05:57.299499 systemd-logind[1837]: Removed session 22. Sep 6 00:05:57.284000 audit[6159]: CRED_DISP pid=6159 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:57.315384 kernel: audit: type=1106 audit(1757117157.282:578): pid=6159 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:57.315574 kernel: audit: type=1104 audit(1757117157.284:579): pid=6159 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:05:57.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.24.61:22-147.75.109.163:48614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:02.308015 systemd[1]: Started sshd@22-172.31.24.61:22-147.75.109.163:59140.service. Sep 6 00:06:02.320624 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 6 00:06:02.320790 kernel: audit: type=1130 audit(1757117162.308:581): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.24.61:22-147.75.109.163:59140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:06:02.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.24.61:22-147.75.109.163:59140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:02.492000 audit[6172]: USER_ACCT pid=6172 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:02.493684 sshd[6172]: Accepted publickey for core from 147.75.109.163 port 59140 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:06:02.496903 sshd[6172]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:06:02.492000 audit[6172]: CRED_ACQ pid=6172 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:02.514404 kernel: audit: type=1101 audit(1757117162.492:582): pid=6172 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:02.514595 kernel: audit: type=1103 audit(1757117162.492:583): pid=6172 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:02.520817 kernel: audit: type=1006 audit(1757117162.492:584): pid=6172 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Sep 6 
00:06:02.492000 audit[6172]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdb49a620 a2=3 a3=1 items=0 ppid=1 pid=6172 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:06:02.528738 systemd[1]: Started session-23.scope. Sep 6 00:06:02.533369 kernel: audit: type=1300 audit(1757117162.492:584): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdb49a620 a2=3 a3=1 items=0 ppid=1 pid=6172 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:06:02.533524 systemd-logind[1837]: New session 23 of user core. Sep 6 00:06:02.492000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:06:02.542601 kernel: audit: type=1327 audit(1757117162.492:584): proctitle=737368643A20636F7265205B707269765D Sep 6 00:06:02.550000 audit[6172]: USER_START pid=6172 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:02.552000 audit[6175]: CRED_ACQ pid=6175 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:02.578832 kernel: audit: type=1105 audit(1757117162.550:585): pid=6172 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:02.578980 kernel: audit: type=1103 
audit(1757117162.552:586): pid=6175 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:02.869860 sshd[6172]: pam_unix(sshd:session): session closed for user core Sep 6 00:06:02.871000 audit[6172]: USER_END pid=6172 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:02.875752 systemd[1]: sshd@22-172.31.24.61:22-147.75.109.163:59140.service: Deactivated successfully. Sep 6 00:06:02.877287 systemd[1]: session-23.scope: Deactivated successfully. Sep 6 00:06:02.885522 systemd-logind[1837]: Session 23 logged out. Waiting for processes to exit. Sep 6 00:06:02.887161 systemd-logind[1837]: Removed session 23. 
Sep 6 00:06:02.871000 audit[6172]: CRED_DISP pid=6172 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:02.904803 kernel: audit: type=1106 audit(1757117162.871:587): pid=6172 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:02.904976 kernel: audit: type=1104 audit(1757117162.871:588): pid=6172 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:02.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.24.61:22-147.75.109.163:59140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:07.403081 env[1848]: time="2025-09-06T00:06:07.403017458Z" level=info msg="StopPodSandbox for \"809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975\"" Sep 6 00:06:07.593951 env[1848]: 2025-09-06 00:06:07.480 [WARNING][6194] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--hw6d9-eth0", GenerateName:"calico-apiserver-797f9c6c85-", Namespace:"calico-apiserver", SelfLink:"", UID:"1c9fcc46-cf36-4206-bdd4-bb1b1b8568f2", ResourceVersion:"1278", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 4, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"797f9c6c85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-61", ContainerID:"48dc23c262ce56081ef0eb0d8b65c1734f9b65c6f0f104c00de819bcab7fac62", Pod:"calico-apiserver-797f9c6c85-hw6d9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie9f7ab0e601", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:06:07.593951 env[1848]: 2025-09-06 00:06:07.480 [INFO][6194] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" Sep 6 00:06:07.593951 env[1848]: 2025-09-06 00:06:07.481 [INFO][6194] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" iface="eth0" netns="" Sep 6 00:06:07.593951 env[1848]: 2025-09-06 00:06:07.481 [INFO][6194] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" Sep 6 00:06:07.593951 env[1848]: 2025-09-06 00:06:07.481 [INFO][6194] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" Sep 6 00:06:07.593951 env[1848]: 2025-09-06 00:06:07.568 [INFO][6201] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" HandleID="k8s-pod-network.809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" Workload="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--hw6d9-eth0" Sep 6 00:06:07.593951 env[1848]: 2025-09-06 00:06:07.568 [INFO][6201] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:06:07.593951 env[1848]: 2025-09-06 00:06:07.568 [INFO][6201] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:06:07.593951 env[1848]: 2025-09-06 00:06:07.582 [WARNING][6201] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" HandleID="k8s-pod-network.809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" Workload="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--hw6d9-eth0" Sep 6 00:06:07.593951 env[1848]: 2025-09-06 00:06:07.582 [INFO][6201] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" HandleID="k8s-pod-network.809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" Workload="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--hw6d9-eth0" Sep 6 00:06:07.593951 env[1848]: 2025-09-06 00:06:07.585 [INFO][6201] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:06:07.593951 env[1848]: 2025-09-06 00:06:07.588 [INFO][6194] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" Sep 6 00:06:07.595143 env[1848]: time="2025-09-06T00:06:07.595081609Z" level=info msg="TearDown network for sandbox \"809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975\" successfully" Sep 6 00:06:07.595346 env[1848]: time="2025-09-06T00:06:07.595309947Z" level=info msg="StopPodSandbox for \"809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975\" returns successfully" Sep 6 00:06:07.596175 env[1848]: time="2025-09-06T00:06:07.596126559Z" level=info msg="RemovePodSandbox for \"809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975\"" Sep 6 00:06:07.596948 env[1848]: time="2025-09-06T00:06:07.596871383Z" level=info msg="Forcibly stopping sandbox \"809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975\"" Sep 6 00:06:07.785867 env[1848]: 2025-09-06 00:06:07.695 [WARNING][6218] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--hw6d9-eth0", GenerateName:"calico-apiserver-797f9c6c85-", Namespace:"calico-apiserver", SelfLink:"", UID:"1c9fcc46-cf36-4206-bdd4-bb1b1b8568f2", ResourceVersion:"1278", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 4, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"797f9c6c85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-61", ContainerID:"48dc23c262ce56081ef0eb0d8b65c1734f9b65c6f0f104c00de819bcab7fac62", Pod:"calico-apiserver-797f9c6c85-hw6d9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie9f7ab0e601", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:06:07.785867 env[1848]: 2025-09-06 00:06:07.696 [INFO][6218] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" Sep 6 00:06:07.785867 env[1848]: 2025-09-06 00:06:07.696 [INFO][6218] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" iface="eth0" netns="" Sep 6 00:06:07.785867 env[1848]: 2025-09-06 00:06:07.696 [INFO][6218] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" Sep 6 00:06:07.785867 env[1848]: 2025-09-06 00:06:07.696 [INFO][6218] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" Sep 6 00:06:07.785867 env[1848]: 2025-09-06 00:06:07.756 [INFO][6225] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" HandleID="k8s-pod-network.809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" Workload="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--hw6d9-eth0" Sep 6 00:06:07.785867 env[1848]: 2025-09-06 00:06:07.757 [INFO][6225] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:06:07.785867 env[1848]: 2025-09-06 00:06:07.757 [INFO][6225] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:06:07.785867 env[1848]: 2025-09-06 00:06:07.774 [WARNING][6225] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" HandleID="k8s-pod-network.809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" Workload="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--hw6d9-eth0" Sep 6 00:06:07.785867 env[1848]: 2025-09-06 00:06:07.774 [INFO][6225] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" HandleID="k8s-pod-network.809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" Workload="ip--172--31--24--61-k8s-calico--apiserver--797f9c6c85--hw6d9-eth0" Sep 6 00:06:07.785867 env[1848]: 2025-09-06 00:06:07.777 [INFO][6225] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:06:07.785867 env[1848]: 2025-09-06 00:06:07.781 [INFO][6218] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975" Sep 6 00:06:07.786869 env[1848]: time="2025-09-06T00:06:07.785911080Z" level=info msg="TearDown network for sandbox \"809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975\" successfully" Sep 6 00:06:07.793208 env[1848]: time="2025-09-06T00:06:07.793079522Z" level=info msg="RemovePodSandbox \"809a95cd065091f77f0f8293ae92df86d6072aae89e940dec8d79a9ed0c53975\" returns successfully" Sep 6 00:06:07.794556 env[1848]: time="2025-09-06T00:06:07.794494621Z" level=info msg="StopPodSandbox for \"84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb\"" Sep 6 00:06:07.908471 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 6 00:06:07.908602 kernel: audit: type=1130 audit(1757117167.897:590): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.24.61:22-147.75.109.163:59142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:06:07.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.24.61:22-147.75.109.163:59142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:07.896961 systemd[1]: Started sshd@23-172.31.24.61:22-147.75.109.163:59142.service. Sep 6 00:06:07.988844 env[1848]: 2025-09-06 00:06:07.868 [WARNING][6239] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--61-k8s-calico--kube--controllers--764999b789--c2t8l-eth0", GenerateName:"calico-kube-controllers-764999b789-", Namespace:"calico-system", SelfLink:"", UID:"92c99ea2-eb5a-48a9-9626-85e265ce8b17", ResourceVersion:"1184", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 4, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"764999b789", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-61", ContainerID:"eb7a032dbd814d4baf823b59cd5d62efd8ad669504c619c7ca7cf8f91ed031e5", Pod:"calico-kube-controllers-764999b789-c2t8l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.40.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie29303d78ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:06:07.988844 env[1848]: 2025-09-06 00:06:07.869 [INFO][6239] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" Sep 6 00:06:07.988844 env[1848]: 2025-09-06 00:06:07.870 [INFO][6239] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" iface="eth0" netns="" Sep 6 00:06:07.988844 env[1848]: 2025-09-06 00:06:07.870 [INFO][6239] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" Sep 6 00:06:07.988844 env[1848]: 2025-09-06 00:06:07.870 [INFO][6239] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" Sep 6 00:06:07.988844 env[1848]: 2025-09-06 00:06:07.957 [INFO][6246] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" HandleID="k8s-pod-network.84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" Workload="ip--172--31--24--61-k8s-calico--kube--controllers--764999b789--c2t8l-eth0" Sep 6 00:06:07.988844 env[1848]: 2025-09-06 00:06:07.958 [INFO][6246] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:06:07.988844 env[1848]: 2025-09-06 00:06:07.958 [INFO][6246] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:06:07.988844 env[1848]: 2025-09-06 00:06:07.974 [WARNING][6246] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" HandleID="k8s-pod-network.84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" Workload="ip--172--31--24--61-k8s-calico--kube--controllers--764999b789--c2t8l-eth0" Sep 6 00:06:07.988844 env[1848]: 2025-09-06 00:06:07.974 [INFO][6246] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" HandleID="k8s-pod-network.84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" Workload="ip--172--31--24--61-k8s-calico--kube--controllers--764999b789--c2t8l-eth0" Sep 6 00:06:07.988844 env[1848]: 2025-09-06 00:06:07.976 [INFO][6246] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:06:07.988844 env[1848]: 2025-09-06 00:06:07.981 [INFO][6239] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" Sep 6 00:06:07.988844 env[1848]: time="2025-09-06T00:06:07.984794945Z" level=info msg="TearDown network for sandbox \"84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb\" successfully" Sep 6 00:06:07.988844 env[1848]: time="2025-09-06T00:06:07.984840571Z" level=info msg="StopPodSandbox for \"84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb\" returns successfully" Sep 6 00:06:07.992625 env[1848]: time="2025-09-06T00:06:07.992556714Z" level=info msg="RemovePodSandbox for \"84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb\"" Sep 6 00:06:07.992823 env[1848]: time="2025-09-06T00:06:07.992626306Z" level=info msg="Forcibly stopping sandbox \"84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb\"" Sep 6 00:06:08.107000 audit[6250]: USER_ACCT pid=6250 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 
addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:08.122567 sshd[6250]: Accepted publickey for core from 147.75.109.163 port 59142 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:06:08.124516 sshd[6250]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:06:08.123000 audit[6250]: CRED_ACQ pid=6250 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:08.144415 kernel: audit: type=1101 audit(1757117168.107:591): pid=6250 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:08.144599 kernel: audit: type=1103 audit(1757117168.123:592): pid=6250 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:08.148309 systemd-logind[1837]: New session 24 of user core. Sep 6 00:06:08.151330 systemd[1]: Started session-24.scope. 
Sep 6 00:06:08.166271 kernel: audit: type=1006 audit(1757117168.123:593): pid=6250 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Sep 6 00:06:08.123000 audit[6250]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff5049ce0 a2=3 a3=1 items=0 ppid=1 pid=6250 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:06:08.188234 kernel: audit: type=1300 audit(1757117168.123:593): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff5049ce0 a2=3 a3=1 items=0 ppid=1 pid=6250 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:06:08.193332 kernel: audit: type=1327 audit(1757117168.123:593): proctitle=737368643A20636F7265205B707269765D Sep 6 00:06:08.123000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:06:08.212000 audit[6250]: USER_START pid=6250 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:08.243750 kernel: audit: type=1105 audit(1757117168.212:594): pid=6250 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:08.243918 kernel: audit: type=1103 audit(1757117168.226:595): pid=6276 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" 
hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:08.226000 audit[6276]: CRED_ACQ pid=6276 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:08.445729 env[1848]: 2025-09-06 00:06:08.149 [WARNING][6263] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--61-k8s-calico--kube--controllers--764999b789--c2t8l-eth0", GenerateName:"calico-kube-controllers-764999b789-", Namespace:"calico-system", SelfLink:"", UID:"92c99ea2-eb5a-48a9-9626-85e265ce8b17", ResourceVersion:"1184", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 4, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"764999b789", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-61", ContainerID:"eb7a032dbd814d4baf823b59cd5d62efd8ad669504c619c7ca7cf8f91ed031e5", Pod:"calico-kube-controllers-764999b789-c2t8l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.40.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie29303d78ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:06:08.445729 env[1848]: 2025-09-06 00:06:08.150 [INFO][6263] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" Sep 6 00:06:08.445729 env[1848]: 2025-09-06 00:06:08.151 [INFO][6263] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" iface="eth0" netns="" Sep 6 00:06:08.445729 env[1848]: 2025-09-06 00:06:08.151 [INFO][6263] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" Sep 6 00:06:08.445729 env[1848]: 2025-09-06 00:06:08.151 [INFO][6263] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" Sep 6 00:06:08.445729 env[1848]: 2025-09-06 00:06:08.392 [INFO][6271] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" HandleID="k8s-pod-network.84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" Workload="ip--172--31--24--61-k8s-calico--kube--controllers--764999b789--c2t8l-eth0" Sep 6 00:06:08.445729 env[1848]: 2025-09-06 00:06:08.392 [INFO][6271] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:06:08.445729 env[1848]: 2025-09-06 00:06:08.392 [INFO][6271] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:06:08.445729 env[1848]: 2025-09-06 00:06:08.414 [WARNING][6271] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" HandleID="k8s-pod-network.84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" Workload="ip--172--31--24--61-k8s-calico--kube--controllers--764999b789--c2t8l-eth0" Sep 6 00:06:08.445729 env[1848]: 2025-09-06 00:06:08.414 [INFO][6271] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" HandleID="k8s-pod-network.84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" Workload="ip--172--31--24--61-k8s-calico--kube--controllers--764999b789--c2t8l-eth0" Sep 6 00:06:08.445729 env[1848]: 2025-09-06 00:06:08.435 [INFO][6271] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:06:08.445729 env[1848]: 2025-09-06 00:06:08.438 [INFO][6263] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb" Sep 6 00:06:08.447233 env[1848]: time="2025-09-06T00:06:08.447156624Z" level=info msg="TearDown network for sandbox \"84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb\" successfully" Sep 6 00:06:08.461206 env[1848]: time="2025-09-06T00:06:08.461113298Z" level=info msg="RemovePodSandbox \"84fbbdca6fe198412c6b61f7c67d99f7960721030611019b9b782c89c0d21aeb\" returns successfully" Sep 6 00:06:08.462245 env[1848]: time="2025-09-06T00:06:08.462154996Z" level=info msg="StopPodSandbox for \"ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b\"" Sep 6 00:06:08.737646 sshd[6250]: pam_unix(sshd:session): session closed for user core Sep 6 00:06:08.747000 audit[6250]: USER_END pid=6250 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:08.765376 
systemd[1]: sshd@23-172.31.24.61:22-147.75.109.163:59142.service: Deactivated successfully. Sep 6 00:06:08.765894 systemd-logind[1837]: Session 24 logged out. Waiting for processes to exit. Sep 6 00:06:08.768043 systemd[1]: session-24.scope: Deactivated successfully. Sep 6 00:06:08.770863 systemd-logind[1837]: Removed session 24. Sep 6 00:06:08.748000 audit[6250]: CRED_DISP pid=6250 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:08.784918 kernel: audit: type=1106 audit(1757117168.747:596): pid=6250 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:08.785041 kernel: audit: type=1104 audit(1757117168.748:597): pid=6250 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:08.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.24.61:22-147.75.109.163:59142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:08.863385 env[1848]: 2025-09-06 00:06:08.633 [WARNING][6306] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--61-k8s-csi--node--driver--zpqph-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8f59dc9d-44f6-4633-8546-11f2219b7da2", ResourceVersion:"1193", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 4, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-61", ContainerID:"6cee43d7be3a590307a93668c8808b67c59fa611300d2d84e6230853b9e04207", Pod:"csi-node-driver-zpqph", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.40.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0fb53521b12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:06:08.863385 env[1848]: 2025-09-06 00:06:08.633 [INFO][6306] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" Sep 6 00:06:08.863385 env[1848]: 2025-09-06 00:06:08.633 [INFO][6306] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" iface="eth0" netns="" Sep 6 00:06:08.863385 env[1848]: 2025-09-06 00:06:08.633 [INFO][6306] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" Sep 6 00:06:08.863385 env[1848]: 2025-09-06 00:06:08.634 [INFO][6306] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" Sep 6 00:06:08.863385 env[1848]: 2025-09-06 00:06:08.740 [INFO][6314] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" HandleID="k8s-pod-network.ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" Workload="ip--172--31--24--61-k8s-csi--node--driver--zpqph-eth0" Sep 6 00:06:08.863385 env[1848]: 2025-09-06 00:06:08.740 [INFO][6314] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:06:08.863385 env[1848]: 2025-09-06 00:06:08.740 [INFO][6314] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:06:08.863385 env[1848]: 2025-09-06 00:06:08.850 [WARNING][6314] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" HandleID="k8s-pod-network.ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" Workload="ip--172--31--24--61-k8s-csi--node--driver--zpqph-eth0" Sep 6 00:06:08.863385 env[1848]: 2025-09-06 00:06:08.850 [INFO][6314] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" HandleID="k8s-pod-network.ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" Workload="ip--172--31--24--61-k8s-csi--node--driver--zpqph-eth0" Sep 6 00:06:08.863385 env[1848]: 2025-09-06 00:06:08.856 [INFO][6314] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:06:08.863385 env[1848]: 2025-09-06 00:06:08.859 [INFO][6306] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" Sep 6 00:06:08.864460 env[1848]: time="2025-09-06T00:06:08.864400238Z" level=info msg="TearDown network for sandbox \"ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b\" successfully" Sep 6 00:06:08.864613 env[1848]: time="2025-09-06T00:06:08.864577512Z" level=info msg="StopPodSandbox for \"ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b\" returns successfully" Sep 6 00:06:08.865514 env[1848]: time="2025-09-06T00:06:08.865458641Z" level=info msg="RemovePodSandbox for \"ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b\"" Sep 6 00:06:08.865756 env[1848]: time="2025-09-06T00:06:08.865690483Z" level=info msg="Forcibly stopping sandbox \"ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b\"" Sep 6 00:06:09.083294 env[1848]: 2025-09-06 00:06:08.969 [WARNING][6340] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--61-k8s-csi--node--driver--zpqph-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8f59dc9d-44f6-4633-8546-11f2219b7da2", ResourceVersion:"1193", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 0, 4, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-61", ContainerID:"6cee43d7be3a590307a93668c8808b67c59fa611300d2d84e6230853b9e04207", Pod:"csi-node-driver-zpqph", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.40.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0fb53521b12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 00:06:09.083294 env[1848]: 2025-09-06 00:06:08.969 [INFO][6340] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" Sep 6 00:06:09.083294 env[1848]: 2025-09-06 00:06:08.970 [INFO][6340] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" iface="eth0" netns="" Sep 6 00:06:09.083294 env[1848]: 2025-09-06 00:06:08.970 [INFO][6340] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" Sep 6 00:06:09.083294 env[1848]: 2025-09-06 00:06:08.970 [INFO][6340] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" Sep 6 00:06:09.083294 env[1848]: 2025-09-06 00:06:09.036 [INFO][6347] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" HandleID="k8s-pod-network.ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" Workload="ip--172--31--24--61-k8s-csi--node--driver--zpqph-eth0" Sep 6 00:06:09.083294 env[1848]: 2025-09-06 00:06:09.040 [INFO][6347] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 00:06:09.083294 env[1848]: 2025-09-06 00:06:09.040 [INFO][6347] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 00:06:09.083294 env[1848]: 2025-09-06 00:06:09.072 [WARNING][6347] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" HandleID="k8s-pod-network.ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" Workload="ip--172--31--24--61-k8s-csi--node--driver--zpqph-eth0" Sep 6 00:06:09.083294 env[1848]: 2025-09-06 00:06:09.072 [INFO][6347] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" HandleID="k8s-pod-network.ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" Workload="ip--172--31--24--61-k8s-csi--node--driver--zpqph-eth0" Sep 6 00:06:09.083294 env[1848]: 2025-09-06 00:06:09.075 [INFO][6347] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 00:06:09.083294 env[1848]: 2025-09-06 00:06:09.077 [INFO][6340] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b" Sep 6 00:06:09.084488 env[1848]: time="2025-09-06T00:06:09.084424615Z" level=info msg="TearDown network for sandbox \"ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b\" successfully" Sep 6 00:06:09.093615 env[1848]: time="2025-09-06T00:06:09.093467445Z" level=info msg="RemovePodSandbox \"ec24d837704d9dce863280dbe84fb6a4e6a9c2fe689a7040367270843860251b\" returns successfully" Sep 6 00:06:13.777279 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 6 00:06:13.777468 kernel: audit: type=1130 audit(1757117173.763:599): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.24.61:22-147.75.109.163:58862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:13.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.24.61:22-147.75.109.163:58862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:06:13.764116 systemd[1]: Started sshd@24-172.31.24.61:22-147.75.109.163:58862.service. Sep 6 00:06:13.958000 audit[6355]: USER_ACCT pid=6355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:13.959861 sshd[6355]: Accepted publickey for core from 147.75.109.163 port 58862 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:06:13.972914 sshd[6355]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:06:13.981894 systemd-logind[1837]: New session 25 of user core. Sep 6 00:06:13.984673 systemd[1]: Started session-25.scope. Sep 6 00:06:13.970000 audit[6355]: CRED_ACQ pid=6355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:13.995423 kernel: audit: type=1101 audit(1757117173.958:600): pid=6355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:13.995578 kernel: audit: type=1103 audit(1757117173.970:601): pid=6355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:14.020240 kernel: audit: type=1006 audit(1757117173.970:602): pid=6355 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Sep 6 00:06:13.970000 audit[6355]: SYSCALL arch=c00000b7 
syscall=64 success=yes exit=3 a0=5 a1=fffff99ca2d0 a2=3 a3=1 items=0 ppid=1 pid=6355 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:06:14.039394 kernel: audit: type=1300 audit(1757117173.970:602): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff99ca2d0 a2=3 a3=1 items=0 ppid=1 pid=6355 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:06:13.970000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:06:14.058721 kernel: audit: type=1327 audit(1757117173.970:602): proctitle=737368643A20636F7265205B707269765D Sep 6 00:06:14.006000 audit[6355]: USER_START pid=6355 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:14.074014 kernel: audit: type=1105 audit(1757117174.006:603): pid=6355 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:14.009000 audit[6358]: CRED_ACQ pid=6358 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:14.094587 kernel: audit: type=1103 audit(1757117174.009:604): pid=6358 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:14.354628 sshd[6355]: pam_unix(sshd:session): session closed for user core Sep 6 00:06:14.356000 audit[6355]: USER_END pid=6355 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:14.370659 systemd[1]: sshd@24-172.31.24.61:22-147.75.109.163:58862.service: Deactivated successfully. Sep 6 00:06:14.356000 audit[6355]: CRED_DISP pid=6355 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:14.383850 kernel: audit: type=1106 audit(1757117174.356:605): pid=6355 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:14.383972 kernel: audit: type=1104 audit(1757117174.356:606): pid=6355 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:14.384331 systemd[1]: session-25.scope: Deactivated successfully. Sep 6 00:06:14.386553 systemd-logind[1837]: Session 25 logged out. Waiting for processes to exit. Sep 6 00:06:14.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.24.61:22-147.75.109.163:58862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 6 00:06:14.390699 systemd-logind[1837]: Removed session 25. Sep 6 00:06:15.375806 systemd[1]: run-containerd-runc-k8s.io-3ef2335b73315b9a545c9095fbad060acfac8f324bb91b5132784cdf07c8300f-runc.H6vyHF.mount: Deactivated successfully. Sep 6 00:06:19.381348 systemd[1]: Started sshd@25-172.31.24.61:22-147.75.109.163:58878.service. Sep 6 00:06:19.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.24.61:22-147.75.109.163:58878 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:19.384203 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 6 00:06:19.384349 kernel: audit: type=1130 audit(1757117179.380:608): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.24.61:22-147.75.109.163:58878 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:19.578442 sshd[6405]: Accepted publickey for core from 147.75.109.163 port 58878 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:06:19.577000 audit[6405]: USER_ACCT pid=6405 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:19.590371 sshd[6405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:06:19.601588 systemd-logind[1837]: New session 26 of user core. Sep 6 00:06:19.604154 systemd[1]: Started session-26.scope. 
Sep 6 00:06:19.588000 audit[6405]: CRED_ACQ pid=6405 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:19.621095 kernel: audit: type=1101 audit(1757117179.577:609): pid=6405 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:19.621309 kernel: audit: type=1103 audit(1757117179.588:610): pid=6405 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:19.651945 kernel: audit: type=1006 audit(1757117179.588:611): pid=6405 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Sep 6 00:06:19.588000 audit[6405]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd8c49810 a2=3 a3=1 items=0 ppid=1 pid=6405 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:06:19.663002 kernel: audit: type=1300 audit(1757117179.588:611): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd8c49810 a2=3 a3=1 items=0 ppid=1 pid=6405 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:06:19.588000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:06:19.667049 kernel: audit: type=1327 audit(1757117179.588:611): proctitle=737368643A20636F7265205B707269765D Sep 6 00:06:19.634000 
audit[6405]: USER_START pid=6405 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:19.679145 kernel: audit: type=1105 audit(1757117179.634:612): pid=6405 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:19.637000 audit[6408]: CRED_ACQ pid=6408 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:19.689175 kernel: audit: type=1103 audit(1757117179.637:613): pid=6408 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:19.956779 sshd[6405]: pam_unix(sshd:session): session closed for user core Sep 6 00:06:19.957000 audit[6405]: USER_END pid=6405 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:19.957000 audit[6405]: CRED_DISP pid=6405 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:19.981561 kernel: 
audit: type=1106 audit(1757117179.957:614): pid=6405 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:19.981699 kernel: audit: type=1104 audit(1757117179.957:615): pid=6405 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:19.973565 systemd[1]: sshd@25-172.31.24.61:22-147.75.109.163:58878.service: Deactivated successfully. Sep 6 00:06:19.975118 systemd[1]: session-26.scope: Deactivated successfully. Sep 6 00:06:19.982507 systemd-logind[1837]: Session 26 logged out. Waiting for processes to exit. Sep 6 00:06:19.985502 systemd-logind[1837]: Removed session 26. Sep 6 00:06:19.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.24.61:22-147.75.109.163:58878 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:24.998202 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 6 00:06:24.998347 kernel: audit: type=1130 audit(1757117184.986:617): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.24.61:22-147.75.109.163:37672 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:24.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.24.61:22-147.75.109.163:37672 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:24.985857 systemd[1]: Started sshd@26-172.31.24.61:22-147.75.109.163:37672.service. 
Sep 6 00:06:25.175469 sshd[6424]: Accepted publickey for core from 147.75.109.163 port 37672 ssh2: RSA SHA256:CT8P9x8s4J0T70k8+LLVTP4XjE3e1SNW15vyou+QijI Sep 6 00:06:25.175000 audit[6424]: USER_ACCT pid=6424 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:25.189525 sshd[6424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:06:25.188000 audit[6424]: CRED_ACQ pid=6424 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:25.199917 kernel: audit: type=1101 audit(1757117185.175:618): pid=6424 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:25.200127 kernel: audit: type=1103 audit(1757117185.188:619): pid=6424 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:25.206636 kernel: audit: type=1006 audit(1757117185.188:620): pid=6424 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Sep 6 00:06:25.216530 systemd-logind[1837]: New session 27 of user core. 
Sep 6 00:06:25.188000 audit[6424]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd26b34d0 a2=3 a3=1 items=0 ppid=1 pid=6424 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:06:25.220312 systemd[1]: Started session-27.scope. Sep 6 00:06:25.227625 kernel: audit: type=1300 audit(1757117185.188:620): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd26b34d0 a2=3 a3=1 items=0 ppid=1 pid=6424 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:06:25.188000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 00:06:25.235465 kernel: audit: type=1327 audit(1757117185.188:620): proctitle=737368643A20636F7265205B707269765D Sep 6 00:06:25.248000 audit[6424]: USER_START pid=6424 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:25.251000 audit[6429]: CRED_ACQ pid=6429 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:25.280665 kernel: audit: type=1105 audit(1757117185.248:621): pid=6424 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:25.280828 kernel: audit: type=1103 audit(1757117185.251:622): pid=6429 uid=0 auid=500 ses=27 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:25.552334 sshd[6424]: pam_unix(sshd:session): session closed for user core Sep 6 00:06:25.554000 audit[6424]: USER_END pid=6424 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:25.559726 systemd-logind[1837]: Session 27 logged out. Waiting for processes to exit. Sep 6 00:06:25.561678 systemd[1]: sshd@26-172.31.24.61:22-147.75.109.163:37672.service: Deactivated successfully. Sep 6 00:06:25.563107 systemd[1]: session-27.scope: Deactivated successfully. Sep 6 00:06:25.565329 systemd-logind[1837]: Removed session 27. Sep 6 00:06:25.556000 audit[6424]: CRED_DISP pid=6424 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:25.579095 kernel: audit: type=1106 audit(1757117185.554:623): pid=6424 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:25.579316 kernel: audit: type=1104 audit(1757117185.556:624): pid=6424 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Sep 6 00:06:25.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.24.61:22-147.75.109.163:37672 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:39.251994 env[1848]: time="2025-09-06T00:06:39.251913983Z" level=info msg="shim disconnected" id=c83364e7f01497ed0cb0db2086ad11f33d93d518ad01cd1c6209ec7f90815296 Sep 6 00:06:39.252811 env[1848]: time="2025-09-06T00:06:39.251996405Z" level=warning msg="cleaning up after shim disconnected" id=c83364e7f01497ed0cb0db2086ad11f33d93d518ad01cd1c6209ec7f90815296 namespace=k8s.io Sep 6 00:06:39.252811 env[1848]: time="2025-09-06T00:06:39.252019902Z" level=info msg="cleaning up dead shim" Sep 6 00:06:39.252346 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c83364e7f01497ed0cb0db2086ad11f33d93d518ad01cd1c6209ec7f90815296-rootfs.mount: Deactivated successfully. Sep 6 00:06:39.270750 env[1848]: time="2025-09-06T00:06:39.270680510Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:06:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6514 runtime=io.containerd.runc.v2\n" Sep 6 00:06:39.357664 kubelet[2926]: I0906 00:06:39.356554 2926 scope.go:117] "RemoveContainer" containerID="c83364e7f01497ed0cb0db2086ad11f33d93d518ad01cd1c6209ec7f90815296" Sep 6 00:06:39.360041 env[1848]: time="2025-09-06T00:06:39.359990357Z" level=info msg="CreateContainer within sandbox \"59b240fab4566e63d90825624e89161dc75199c517b9827e2129f1a7ccce4559\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Sep 6 00:06:39.388273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1959416644.mount: Deactivated successfully. 
Sep 6 00:06:39.402063 env[1848]: time="2025-09-06T00:06:39.401979756Z" level=info msg="CreateContainer within sandbox \"59b240fab4566e63d90825624e89161dc75199c517b9827e2129f1a7ccce4559\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"fac4434e4ba7b224f1fe4bad4ff70c9613eb32a8e54a01618d2bdcacb58c804f\"" Sep 6 00:06:39.403003 env[1848]: time="2025-09-06T00:06:39.402933450Z" level=info msg="StartContainer for \"fac4434e4ba7b224f1fe4bad4ff70c9613eb32a8e54a01618d2bdcacb58c804f\"" Sep 6 00:06:39.522571 env[1848]: time="2025-09-06T00:06:39.522431191Z" level=info msg="StartContainer for \"fac4434e4ba7b224f1fe4bad4ff70c9613eb32a8e54a01618d2bdcacb58c804f\" returns successfully" Sep 6 00:06:40.458235 env[1848]: time="2025-09-06T00:06:40.458125292Z" level=info msg="shim disconnected" id=66e244cc7ea3199b67bc9659a4fa4bd1422c4f6cb6290fb7617f0c3d7f3ab8b6 Sep 6 00:06:40.459099 env[1848]: time="2025-09-06T00:06:40.459045168Z" level=warning msg="cleaning up after shim disconnected" id=66e244cc7ea3199b67bc9659a4fa4bd1422c4f6cb6290fb7617f0c3d7f3ab8b6 namespace=k8s.io Sep 6 00:06:40.459289 env[1848]: time="2025-09-06T00:06:40.459258519Z" level=info msg="cleaning up dead shim" Sep 6 00:06:40.460610 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66e244cc7ea3199b67bc9659a4fa4bd1422c4f6cb6290fb7617f0c3d7f3ab8b6-rootfs.mount: Deactivated successfully. 
Sep 6 00:06:40.478685 env[1848]: time="2025-09-06T00:06:40.478629773Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:06:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6578 runtime=io.containerd.runc.v2\n" Sep 6 00:06:41.367528 kubelet[2926]: I0906 00:06:41.367467 2926 scope.go:117] "RemoveContainer" containerID="66e244cc7ea3199b67bc9659a4fa4bd1422c4f6cb6290fb7617f0c3d7f3ab8b6" Sep 6 00:06:41.370696 env[1848]: time="2025-09-06T00:06:41.370626526Z" level=info msg="CreateContainer within sandbox \"d6bd0ef6a1977ed847d15b7fab33bdeefbc01e877bdc8cde6ac0a2b043ded691\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Sep 6 00:06:41.398599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1433142361.mount: Deactivated successfully. Sep 6 00:06:41.419130 env[1848]: time="2025-09-06T00:06:41.419068855Z" level=info msg="CreateContainer within sandbox \"d6bd0ef6a1977ed847d15b7fab33bdeefbc01e877bdc8cde6ac0a2b043ded691\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2377a88ab1d9e58bf2794080fdd6a6cc4aee1313fedd28e6e337cf762fb56159\"" Sep 6 00:06:41.419981 env[1848]: time="2025-09-06T00:06:41.419931607Z" level=info msg="StartContainer for \"2377a88ab1d9e58bf2794080fdd6a6cc4aee1313fedd28e6e337cf762fb56159\"" Sep 6 00:06:41.569071 env[1848]: time="2025-09-06T00:06:41.568978266Z" level=info msg="StartContainer for \"2377a88ab1d9e58bf2794080fdd6a6cc4aee1313fedd28e6e337cf762fb56159\" returns successfully" Sep 6 00:06:45.387705 systemd[1]: run-containerd-runc-k8s.io-3ef2335b73315b9a545c9095fbad060acfac8f324bb91b5132784cdf07c8300f-runc.AyC1vO.mount: Deactivated successfully. Sep 6 00:06:46.075887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-70f344c0e2f97fa9ea58cd65f13ec6d239a669a0f92e8e44243fe97a3888c050-rootfs.mount: Deactivated successfully. 
Sep 6 00:06:46.077618 env[1848]: time="2025-09-06T00:06:46.077322265Z" level=info msg="shim disconnected" id=70f344c0e2f97fa9ea58cd65f13ec6d239a669a0f92e8e44243fe97a3888c050 Sep 6 00:06:46.078351 env[1848]: time="2025-09-06T00:06:46.078295366Z" level=warning msg="cleaning up after shim disconnected" id=70f344c0e2f97fa9ea58cd65f13ec6d239a669a0f92e8e44243fe97a3888c050 namespace=k8s.io Sep 6 00:06:46.078589 env[1848]: time="2025-09-06T00:06:46.078557152Z" level=info msg="cleaning up dead shim" Sep 6 00:06:46.092397 env[1848]: time="2025-09-06T00:06:46.092338693Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:06:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6678 runtime=io.containerd.runc.v2\n" Sep 6 00:06:46.396788 kubelet[2926]: I0906 00:06:46.396569 2926 scope.go:117] "RemoveContainer" containerID="70f344c0e2f97fa9ea58cd65f13ec6d239a669a0f92e8e44243fe97a3888c050" Sep 6 00:06:46.400233 env[1848]: time="2025-09-06T00:06:46.400153894Z" level=info msg="CreateContainer within sandbox \"4f10875231056233c597eeb1e4d50be62d9fbf4f78d484d76b2067b8a5e1d4e3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Sep 6 00:06:46.434064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2234777718.mount: Deactivated successfully. Sep 6 00:06:46.449572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount532272366.mount: Deactivated successfully. 
Sep 6 00:06:46.456573 env[1848]: time="2025-09-06T00:06:46.456418996Z" level=info msg="CreateContainer within sandbox \"4f10875231056233c597eeb1e4d50be62d9fbf4f78d484d76b2067b8a5e1d4e3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d5d84b8698314bc1cd35c9243b1e16e7f2fd70fb8ad526ff2cc86f64f994af2a\"" Sep 6 00:06:46.460917 env[1848]: time="2025-09-06T00:06:46.460864809Z" level=info msg="StartContainer for \"d5d84b8698314bc1cd35c9243b1e16e7f2fd70fb8ad526ff2cc86f64f994af2a\"" Sep 6 00:06:46.624493 env[1848]: time="2025-09-06T00:06:46.622791271Z" level=info msg="StartContainer for \"d5d84b8698314bc1cd35c9243b1e16e7f2fd70fb8ad526ff2cc86f64f994af2a\" returns successfully" Sep 6 00:06:47.418657 systemd[1]: run-containerd-runc-k8s.io-d5d84b8698314bc1cd35c9243b1e16e7f2fd70fb8ad526ff2cc86f64f994af2a-runc.6AqLsL.mount: Deactivated successfully. Sep 6 00:06:47.591042 kubelet[2926]: E0906 00:06:47.590989 2926 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-61?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 6 00:06:48.687153 systemd[1]: run-containerd-runc-k8s.io-bd8c590e3adb75480b1d8fdfd093d78e6d81f7d307ce449a129f1213933b39b6-runc.2mhxkW.mount: Deactivated successfully. Sep 6 00:06:50.933672 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fac4434e4ba7b224f1fe4bad4ff70c9613eb32a8e54a01618d2bdcacb58c804f-rootfs.mount: Deactivated successfully. 
Sep 6 00:06:50.947612 env[1848]: time="2025-09-06T00:06:50.947545595Z" level=info msg="shim disconnected" id=fac4434e4ba7b224f1fe4bad4ff70c9613eb32a8e54a01618d2bdcacb58c804f Sep 6 00:06:50.948521 env[1848]: time="2025-09-06T00:06:50.948484890Z" level=warning msg="cleaning up after shim disconnected" id=fac4434e4ba7b224f1fe4bad4ff70c9613eb32a8e54a01618d2bdcacb58c804f namespace=k8s.io Sep 6 00:06:50.948679 env[1848]: time="2025-09-06T00:06:50.948651450Z" level=info msg="cleaning up dead shim" Sep 6 00:06:50.962478 env[1848]: time="2025-09-06T00:06:50.962422690Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:06:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6761 runtime=io.containerd.runc.v2\n" Sep 6 00:06:51.416277 kubelet[2926]: I0906 00:06:51.415666 2926 scope.go:117] "RemoveContainer" containerID="c83364e7f01497ed0cb0db2086ad11f33d93d518ad01cd1c6209ec7f90815296" Sep 6 00:06:51.416277 kubelet[2926]: I0906 00:06:51.416114 2926 scope.go:117] "RemoveContainer" containerID="fac4434e4ba7b224f1fe4bad4ff70c9613eb32a8e54a01618d2bdcacb58c804f" Sep 6 00:06:51.419487 env[1848]: time="2025-09-06T00:06:51.419429401Z" level=info msg="RemoveContainer for \"c83364e7f01497ed0cb0db2086ad11f33d93d518ad01cd1c6209ec7f90815296\"" Sep 6 00:06:51.420529 kubelet[2926]: E0906 00:06:51.420454 2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-58fc44c59b-22x5l_tigera-operator(4369ba7f-d34b-44c2-b8d6-90fc3fea67de)\"" pod="tigera-operator/tigera-operator-58fc44c59b-22x5l" podUID="4369ba7f-d34b-44c2-b8d6-90fc3fea67de" Sep 6 00:06:51.430531 env[1848]: time="2025-09-06T00:06:51.430468635Z" level=info msg="RemoveContainer for \"c83364e7f01497ed0cb0db2086ad11f33d93d518ad01cd1c6209ec7f90815296\" returns successfully" Sep 6 00:06:57.592431 kubelet[2926]: E0906 00:06:57.592353 2926 controller.go:195] "Failed 
to update lease" err="Put \"https://172.31.24.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-61?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"