Sep 13 00:03:50.999590 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 13 00:03:50.999626 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Sep 12 23:05:37 -00 2025
Sep 13 00:03:50.999649 kernel: efi: EFI v2.70 by EDK II
Sep 13 00:03:50.999664 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x716fcf98
Sep 13 00:03:50.999677 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:03:50.999691 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 13 00:03:50.999707 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 13 00:03:50.999721 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 13 00:03:50.999735 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 13 00:03:50.999748 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 13 00:03:50.999766 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 13 00:03:50.999780 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 13 00:03:51.001871 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 13 00:03:51.001893 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 13 00:03:51.001910 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 13 00:03:51.001932 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 13 00:03:51.001947 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 13 00:03:51.001961 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 13 00:03:51.001976 kernel: printk: bootconsole [uart0] enabled
Sep 13 00:03:51.001990 kernel: NUMA: Failed to initialise from firmware
Sep 13 00:03:51.002005 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 13 00:03:51.002020 kernel: NUMA: NODE_DATA [mem 0x4b5843900-0x4b5848fff]
Sep 13 00:03:51.002035 kernel: Zone ranges:
Sep 13 00:03:51.002050 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 13 00:03:51.002064 kernel: DMA32 empty
Sep 13 00:03:51.002078 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 13 00:03:51.002097 kernel: Movable zone start for each node
Sep 13 00:03:51.002111 kernel: Early memory node ranges
Sep 13 00:03:51.002126 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 13 00:03:51.002141 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 13 00:03:51.002155 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Sep 13 00:03:51.002169 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 13 00:03:51.002183 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 13 00:03:51.002197 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 13 00:03:51.002212 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 13 00:03:51.002226 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 13 00:03:51.002240 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 13 00:03:51.002254 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 13 00:03:51.002273 kernel: psci: probing for conduit method from ACPI.
Sep 13 00:03:51.002287 kernel: psci: PSCIv1.0 detected in firmware.
Sep 13 00:03:51.002308 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 13 00:03:51.002323 kernel: psci: Trusted OS migration not required
Sep 13 00:03:51.002338 kernel: psci: SMC Calling Convention v1.1
Sep 13 00:03:51.002357 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Sep 13 00:03:51.002372 kernel: ACPI: SRAT not present
Sep 13 00:03:51.002388 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Sep 13 00:03:51.002403 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Sep 13 00:03:51.002418 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 13 00:03:51.002433 kernel: Detected PIPT I-cache on CPU0
Sep 13 00:03:51.002449 kernel: CPU features: detected: GIC system register CPU interface
Sep 13 00:03:51.002464 kernel: CPU features: detected: Spectre-v2
Sep 13 00:03:51.002479 kernel: CPU features: detected: Spectre-v3a
Sep 13 00:03:51.002494 kernel: CPU features: detected: Spectre-BHB
Sep 13 00:03:51.002508 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 13 00:03:51.002527 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 13 00:03:51.002542 kernel: CPU features: detected: ARM erratum 1742098
Sep 13 00:03:51.002557 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 13 00:03:51.002572 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Sep 13 00:03:51.002587 kernel: Policy zone: Normal
Sep 13 00:03:51.002605 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=563df7b8a9b19b8c496587ae06f3c3ec1604a5105c3a3f313c9ccaa21d8055ca
Sep 13 00:03:51.002621 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:03:51.002636 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 00:03:51.002651 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:03:51.002666 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:03:51.002685 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Sep 13 00:03:51.002701 kernel: Memory: 3824460K/4030464K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 206004K reserved, 0K cma-reserved)
Sep 13 00:03:51.002716 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 00:03:51.002731 kernel: trace event string verifier disabled
Sep 13 00:03:51.002746 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:03:51.002762 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:03:51.002777 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 00:03:51.002816 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:03:51.002835 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:03:51.002850 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:03:51.002866 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 00:03:51.002881 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 13 00:03:51.002901 kernel: GICv3: 96 SPIs implemented
Sep 13 00:03:51.002916 kernel: GICv3: 0 Extended SPIs implemented
Sep 13 00:03:51.002931 kernel: GICv3: Distributor has no Range Selector support
Sep 13 00:03:51.002946 kernel: Root IRQ handler: gic_handle_irq
Sep 13 00:03:51.002974 kernel: GICv3: 16 PPIs implemented
Sep 13 00:03:51.002995 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 13 00:03:51.003010 kernel: ACPI: SRAT not present
Sep 13 00:03:51.003025 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 13 00:03:51.003040 kernel: ITS@0x0000000010080000: allocated 8192 Devices @400090000 (indirect, esz 8, psz 64K, shr 1)
Sep 13 00:03:51.003056 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000a0000 (flat, esz 8, psz 64K, shr 1)
Sep 13 00:03:51.003071 kernel: GICv3: using LPI property table @0x00000004000b0000
Sep 13 00:03:51.003091 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 13 00:03:51.003107 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Sep 13 00:03:51.003122 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 13 00:03:51.003138 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 13 00:03:51.003153 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 13 00:03:51.003168 kernel: Console: colour dummy device 80x25
Sep 13 00:03:51.003184 kernel: printk: console [tty1] enabled
Sep 13 00:03:51.003200 kernel: ACPI: Core revision 20210730
Sep 13 00:03:51.003215 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 13 00:03:51.003231 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:03:51.003251 kernel: LSM: Security Framework initializing
Sep 13 00:03:51.003266 kernel: SELinux: Initializing.
Sep 13 00:03:51.003282 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:03:51.003298 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:03:51.003314 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:03:51.003329 kernel: Platform MSI: ITS@0x10080000 domain created
Sep 13 00:03:51.003345 kernel: PCI/MSI: ITS@0x10080000 domain created
Sep 13 00:03:51.003361 kernel: Remapping and enabling EFI services.
Sep 13 00:03:51.003376 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:03:51.003391 kernel: Detected PIPT I-cache on CPU1
Sep 13 00:03:51.003411 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 13 00:03:51.003427 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Sep 13 00:03:51.003442 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 13 00:03:51.003457 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:03:51.003473 kernel: SMP: Total of 2 processors activated.
Sep 13 00:03:51.003488 kernel: CPU features: detected: 32-bit EL0 Support
Sep 13 00:03:51.003503 kernel: CPU features: detected: 32-bit EL1 Support
Sep 13 00:03:51.003519 kernel: CPU features: detected: CRC32 instructions
Sep 13 00:03:51.003534 kernel: CPU: All CPU(s) started at EL1
Sep 13 00:03:51.003553 kernel: alternatives: patching kernel code
Sep 13 00:03:51.003568 kernel: devtmpfs: initialized
Sep 13 00:03:51.003593 kernel: KASLR disabled due to lack of seed
Sep 13 00:03:51.003613 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:03:51.003630 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 00:03:51.003646 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:03:51.003661 kernel: SMBIOS 3.0.0 present.
Sep 13 00:03:51.003678 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 13 00:03:51.003694 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:03:51.003709 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 13 00:03:51.003726 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 13 00:03:51.003746 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 13 00:03:51.003762 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:03:51.003778 kernel: audit: type=2000 audit(0.292:1): state=initialized audit_enabled=0 res=1
Sep 13 00:03:51.003813 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:03:51.003832 kernel: cpuidle: using governor menu
Sep 13 00:03:51.003853 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 13 00:03:51.003870 kernel: ASID allocator initialised with 32768 entries
Sep 13 00:03:51.003886 kernel: ACPI: bus type PCI registered
Sep 13 00:03:51.003902 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:03:51.003918 kernel: Serial: AMBA PL011 UART driver
Sep 13 00:03:51.003934 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:03:51.003950 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Sep 13 00:03:51.003966 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:03:51.003982 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Sep 13 00:03:51.004002 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 00:03:51.004018 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 13 00:03:51.004034 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:03:51.004050 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:03:51.004066 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:03:51.004082 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 13 00:03:51.004098 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 13 00:03:51.004114 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 13 00:03:51.004130 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:03:51.004146 kernel: ACPI: Interpreter enabled
Sep 13 00:03:51.004166 kernel: ACPI: Using GIC for interrupt routing
Sep 13 00:03:51.004181 kernel: ACPI: MCFG table detected, 1 entries
Sep 13 00:03:51.004197 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 13 00:03:51.004475 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:03:51.004675 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 13 00:03:51.004898 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 13 00:03:51.005098 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 13 00:03:51.005301 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 13 00:03:51.005324 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Sep 13 00:03:51.005341 kernel: acpiphp: Slot [1] registered
Sep 13 00:03:51.005357 kernel: acpiphp: Slot [2] registered
Sep 13 00:03:51.005373 kernel: acpiphp: Slot [3] registered
Sep 13 00:03:51.005389 kernel: acpiphp: Slot [4] registered
Sep 13 00:03:51.005406 kernel: acpiphp: Slot [5] registered
Sep 13 00:03:51.005422 kernel: acpiphp: Slot [6] registered
Sep 13 00:03:51.005438 kernel: acpiphp: Slot [7] registered
Sep 13 00:03:51.005459 kernel: acpiphp: Slot [8] registered
Sep 13 00:03:51.005475 kernel: acpiphp: Slot [9] registered
Sep 13 00:03:51.005490 kernel: acpiphp: Slot [10] registered
Sep 13 00:03:51.005506 kernel: acpiphp: Slot [11] registered
Sep 13 00:03:51.005523 kernel: acpiphp: Slot [12] registered
Sep 13 00:03:51.005538 kernel: acpiphp: Slot [13] registered
Sep 13 00:03:51.005554 kernel: acpiphp: Slot [14] registered
Sep 13 00:03:51.005570 kernel: acpiphp: Slot [15] registered
Sep 13 00:03:51.005586 kernel: acpiphp: Slot [16] registered
Sep 13 00:03:51.005606 kernel: acpiphp: Slot [17] registered
Sep 13 00:03:51.005622 kernel: acpiphp: Slot [18] registered
Sep 13 00:03:51.005638 kernel: acpiphp: Slot [19] registered
Sep 13 00:03:51.009627 kernel: acpiphp: Slot [20] registered
Sep 13 00:03:51.009664 kernel: acpiphp: Slot [21] registered
Sep 13 00:03:51.009681 kernel: acpiphp: Slot [22] registered
Sep 13 00:03:51.009698 kernel: acpiphp: Slot [23] registered
Sep 13 00:03:51.009715 kernel: acpiphp: Slot [24] registered
Sep 13 00:03:51.009732 kernel: acpiphp: Slot [25] registered
Sep 13 00:03:51.009748 kernel: acpiphp: Slot [26] registered
Sep 13 00:03:51.009772 kernel: acpiphp: Slot [27] registered
Sep 13 00:03:51.009803 kernel: acpiphp: Slot [28] registered
Sep 13 00:03:51.009826 kernel: acpiphp: Slot [29] registered
Sep 13 00:03:51.009843 kernel: acpiphp: Slot [30] registered
Sep 13 00:03:51.009859 kernel: acpiphp: Slot [31] registered
Sep 13 00:03:51.009875 kernel: PCI host bridge to bus 0000:00
Sep 13 00:03:51.010135 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 13 00:03:51.031557 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 13 00:03:51.031834 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 13 00:03:51.032263 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 13 00:03:51.032519 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Sep 13 00:03:51.032737 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Sep 13 00:03:51.033062 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Sep 13 00:03:51.033278 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 13 00:03:51.033492 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Sep 13 00:03:51.033690 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 13 00:03:51.033931 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 13 00:03:51.034131 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Sep 13 00:03:51.034358 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Sep 13 00:03:51.034574 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Sep 13 00:03:51.034912 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 13 00:03:51.035176 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Sep 13 00:03:51.035377 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Sep 13 00:03:51.035577 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Sep 13 00:03:51.035773 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Sep 13 00:03:51.036002 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Sep 13 00:03:51.036185 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 13 00:03:51.036360 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 13 00:03:51.036540 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 13 00:03:51.036563 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 13 00:03:51.036580 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 13 00:03:51.036597 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 13 00:03:51.036613 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 13 00:03:51.036629 kernel: iommu: Default domain type: Translated
Sep 13 00:03:51.036645 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 13 00:03:51.036662 kernel: vgaarb: loaded
Sep 13 00:03:51.036678 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 13 00:03:51.036699 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 13 00:03:51.036715 kernel: PTP clock support registered
Sep 13 00:03:51.036731 kernel: Registered efivars operations
Sep 13 00:03:51.036747 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 13 00:03:51.036764 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:03:51.036781 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:03:51.036817 kernel: pnp: PnP ACPI init
Sep 13 00:03:51.037023 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 13 00:03:51.037052 kernel: pnp: PnP ACPI: found 1 devices
Sep 13 00:03:51.037069 kernel: NET: Registered PF_INET protocol family
Sep 13 00:03:51.037086 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:03:51.037102 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 00:03:51.037119 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:03:51.037136 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:03:51.037152 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Sep 13 00:03:51.037168 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 00:03:51.037185 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:03:51.037205 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:03:51.037221 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:03:51.037237 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:03:51.037253 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Sep 13 00:03:51.037270 kernel: kvm [1]: HYP mode not available
Sep 13 00:03:51.037286 kernel: Initialise system trusted keyrings
Sep 13 00:03:51.037302 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 00:03:51.037318 kernel: Key type asymmetric registered
Sep 13 00:03:51.037334 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:03:51.037354 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 13 00:03:51.037371 kernel: io scheduler mq-deadline registered
Sep 13 00:03:51.037387 kernel: io scheduler kyber registered
Sep 13 00:03:51.037404 kernel: io scheduler bfq registered
Sep 13 00:03:51.037599 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 13 00:03:51.037624 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 13 00:03:51.037640 kernel: ACPI: button: Power Button [PWRB]
Sep 13 00:03:51.037657 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 13 00:03:51.037673 kernel: ACPI: button: Sleep Button [SLPB]
Sep 13 00:03:51.037695 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:03:51.037712 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 13 00:03:51.056096 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 13 00:03:51.056151 kernel: printk: console [ttyS0] disabled
Sep 13 00:03:51.056170 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 13 00:03:51.056201 kernel: printk: console [ttyS0] enabled
Sep 13 00:03:51.056220 kernel: printk: bootconsole [uart0] disabled
Sep 13 00:03:51.056237 kernel: thunder_xcv, ver 1.0
Sep 13 00:03:51.056254 kernel: thunder_bgx, ver 1.0
Sep 13 00:03:51.056279 kernel: nicpf, ver 1.0
Sep 13 00:03:51.056296 kernel: nicvf, ver 1.0
Sep 13 00:03:51.056527 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 13 00:03:51.056718 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-13T00:03:50 UTC (1757721830)
Sep 13 00:03:51.056741 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 13 00:03:51.056758 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:03:51.056775 kernel: Segment Routing with IPv6
Sep 13 00:03:51.058849 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:03:51.058902 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:03:51.058920 kernel: Key type dns_resolver registered
Sep 13 00:03:51.058938 kernel: registered taskstats version 1
Sep 13 00:03:51.058955 kernel: Loading compiled-in X.509 certificates
Sep 13 00:03:51.058992 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: 47ac98e9306f36eebe4291d409359a5a5d0c2b9c'
Sep 13 00:03:51.059010 kernel: Key type .fscrypt registered
Sep 13 00:03:51.059027 kernel: Key type fscrypt-provisioning registered
Sep 13 00:03:51.059043 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:03:51.059059 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:03:51.059082 kernel: ima: No architecture policies found
Sep 13 00:03:51.059098 kernel: clk: Disabling unused clocks
Sep 13 00:03:51.059114 kernel: Freeing unused kernel memory: 36416K
Sep 13 00:03:51.059130 kernel: Run /init as init process
Sep 13 00:03:51.059147 kernel: with arguments:
Sep 13 00:03:51.059164 kernel: /init
Sep 13 00:03:51.059180 kernel: with environment:
Sep 13 00:03:51.059197 kernel: HOME=/
Sep 13 00:03:51.059213 kernel: TERM=linux
Sep 13 00:03:51.059233 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:03:51.059255 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 00:03:51.059276 systemd[1]: Detected virtualization amazon.
Sep 13 00:03:51.059295 systemd[1]: Detected architecture arm64.
Sep 13 00:03:51.059312 systemd[1]: Running in initrd.
Sep 13 00:03:51.059329 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:03:51.059346 systemd[1]: Hostname set to .
Sep 13 00:03:51.059369 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:03:51.059386 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:03:51.059404 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 00:03:51.059421 systemd[1]: Reached target cryptsetup.target.
Sep 13 00:03:51.059438 systemd[1]: Reached target paths.target.
Sep 13 00:03:51.059456 systemd[1]: Reached target slices.target.
Sep 13 00:03:51.059473 systemd[1]: Reached target swap.target.
Sep 13 00:03:51.059490 systemd[1]: Reached target timers.target.
Sep 13 00:03:51.059512 systemd[1]: Listening on iscsid.socket.
Sep 13 00:03:51.059530 systemd[1]: Listening on iscsiuio.socket.
Sep 13 00:03:51.059548 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 13 00:03:51.059565 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 13 00:03:51.059583 systemd[1]: Listening on systemd-journald.socket.
Sep 13 00:03:51.059600 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 00:03:51.059617 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 00:03:51.059635 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 00:03:51.059656 systemd[1]: Reached target sockets.target.
Sep 13 00:03:51.059674 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 00:03:51.059691 systemd[1]: Finished network-cleanup.service.
Sep 13 00:03:51.059709 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:03:51.059726 systemd[1]: Starting systemd-journald.service...
Sep 13 00:03:51.059743 systemd[1]: Starting systemd-modules-load.service...
Sep 13 00:03:51.059761 systemd[1]: Starting systemd-resolved.service...
Sep 13 00:03:51.059778 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 13 00:03:51.064627 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 00:03:51.064672 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:03:51.064693 kernel: audit: type=1130 audit(1757721830.985:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:51.064714 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 00:03:51.064733 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 13 00:03:51.064752 kernel: audit: type=1130 audit(1757721831.022:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:51.064770 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 13 00:03:51.064814 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 00:03:51.064839 kernel: audit: type=1130 audit(1757721831.045:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:51.064867 systemd-journald[310]: Journal started
Sep 13 00:03:51.064968 systemd-journald[310]: Runtime Journal (/run/log/journal/ec2376f50f0c3bbd006a59a8b5052245) is 8.0M, max 75.4M, 67.4M free.
Sep 13 00:03:50.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:51.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:51.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:50.965856 systemd-modules-load[311]: Inserted module 'overlay'
Sep 13 00:03:51.069829 systemd[1]: Started systemd-journald.service.
Sep 13 00:03:51.079191 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:03:51.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:51.092821 kernel: audit: type=1130 audit(1757721831.081:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:51.099202 systemd-resolved[312]: Positive Trust Anchors:
Sep 13 00:03:51.099229 systemd-resolved[312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:03:51.099285 systemd-resolved[312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 00:03:51.102939 systemd-modules-load[311]: Inserted module 'br_netfilter'
Sep 13 00:03:51.103811 kernel: Bridge firewalling registered
Sep 13 00:03:51.129149 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 13 00:03:51.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:51.138495 systemd[1]: Starting dracut-cmdline.service...
Sep 13 00:03:51.152070 kernel: audit: type=1130 audit(1757721831.135:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:51.171184 dracut-cmdline[327]: dracut-dracut-053
Sep 13 00:03:51.174750 kernel: SCSI subsystem initialized
Sep 13 00:03:51.181588 dracut-cmdline[327]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=563df7b8a9b19b8c496587ae06f3c3ec1604a5105c3a3f313c9ccaa21d8055ca
Sep 13 00:03:51.217334 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:03:51.217401 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:03:51.222261 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 13 00:03:51.227931 systemd-modules-load[311]: Inserted module 'dm_multipath'
Sep 13 00:03:51.235974 systemd[1]: Finished systemd-modules-load.service.
Sep 13 00:03:51.245367 systemd[1]: Starting systemd-sysctl.service...
Sep 13 00:03:51.263873 kernel: audit: type=1130 audit(1757721831.242:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:51.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:51.269545 systemd[1]: Finished systemd-sysctl.service.
Sep 13 00:03:51.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:51.287828 kernel: audit: type=1130 audit(1757721831.277:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:51.371817 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:03:51.390829 kernel: iscsi: registered transport (tcp)
Sep 13 00:03:51.418456 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:03:51.418526 kernel: QLogic iSCSI HBA Driver
Sep 13 00:03:51.586832 kernel: random: crng init done
Sep 13 00:03:51.587053 systemd-resolved[312]: Defaulting to hostname 'linux'.
Sep 13 00:03:51.593018 systemd[1]: Started systemd-resolved.service.
Sep 13 00:03:51.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:51.603218 systemd[1]: Reached target nss-lookup.target.
Sep 13 00:03:51.617375 kernel: audit: type=1130 audit(1757721831.601:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:51.617700 systemd[1]: Finished dracut-cmdline.service.
Sep 13 00:03:51.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:51.632157 systemd[1]: Starting dracut-pre-udev.service...
Sep 13 00:03:51.641852 kernel: audit: type=1130 audit(1757721831.623:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:03:51.698835 kernel: raid6: neonx8 gen() 6423 MB/s
Sep 13 00:03:51.716824 kernel: raid6: neonx8 xor() 4547 MB/s
Sep 13 00:03:51.734832 kernel: raid6: neonx4 gen() 6602 MB/s
Sep 13 00:03:51.752831 kernel: raid6: neonx4 xor() 4734 MB/s
Sep 13 00:03:51.770823 kernel: raid6: neonx2 gen() 5838 MB/s
Sep 13 00:03:51.788823 kernel: raid6: neonx2 xor() 4390 MB/s
Sep 13 00:03:51.806823 kernel: raid6: neonx1 gen() 4523 MB/s
Sep 13 00:03:51.824825 kernel: raid6: neonx1 xor() 3584 MB/s
Sep 13 00:03:51.842833 kernel: raid6: int64x8 gen() 3435 MB/s
Sep 13 00:03:51.860824 kernel: raid6: int64x8 xor() 2055 MB/s
Sep 13 00:03:51.878825 kernel: raid6: int64x4 gen() 3845 MB/s
Sep 13 00:03:51.896821 kernel: raid6: int64x4 xor() 2167 MB/s
Sep 13 00:03:51.914822 kernel: raid6: int64x2 gen() 3619 MB/s
Sep 13 00:03:51.932832 kernel: raid6: int64x2 xor() 1921 MB/s
Sep 13 00:03:51.950831 kernel: raid6: int64x1 gen() 2755 MB/s
Sep 13 00:03:51.970240 kernel: raid6: int64x1 xor() 1437 MB/s
Sep 13 00:03:51.970270 kernel: raid6: using algorithm neonx4 gen() 6602 MB/s
Sep 13 00:03:51.970294 kernel: raid6: .... xor() 4734 MB/s, rmw enabled
Sep 13 00:03:51.972002 kernel: raid6: using neon recovery algorithm
Sep 13 00:03:51.992130 kernel: xor: measuring software checksum speed
Sep 13 00:03:51.992192 kernel: 8regs : 9299 MB/sec
Sep 13 00:03:51.993960 kernel: 32regs : 11102 MB/sec
Sep 13 00:03:51.995866 kernel: arm64_neon : 9570 MB/sec
Sep 13 00:03:51.995897 kernel: xor: using function: 32regs (11102 MB/sec)
Sep 13 00:03:52.093837 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Sep 13 00:03:52.111096 systemd[1]: Finished dracut-pre-udev.service.
Sep 13 00:03:52.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Sep 13 00:03:52.113000 audit: BPF prog-id=7 op=LOAD Sep 13 00:03:52.113000 audit: BPF prog-id=8 op=LOAD Sep 13 00:03:52.116160 systemd[1]: Starting systemd-udevd.service... Sep 13 00:03:52.145318 systemd-udevd[508]: Using default interface naming scheme 'v252'. Sep 13 00:03:52.154428 systemd[1]: Started systemd-udevd.service. Sep 13 00:03:52.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:52.169254 systemd[1]: Starting dracut-pre-trigger.service... Sep 13 00:03:52.200183 dracut-pre-trigger[524]: rd.md=0: removing MD RAID activation Sep 13 00:03:52.261613 systemd[1]: Finished dracut-pre-trigger.service. Sep 13 00:03:52.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:52.263180 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 00:03:52.366044 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:03:52.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:52.477197 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 13 00:03:52.477259 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Sep 13 00:03:52.494117 kernel: ena 0000:00:05.0: ENA device version: 0.10 Sep 13 00:03:52.494359 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Sep 13 00:03:52.494564 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:f8:65:ba:e8:f1 Sep 13 00:03:52.497010 (udev-worker)[572]: Network interface NamePolicy= disabled on kernel command line. 
Sep 13 00:03:52.524210 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Sep 13 00:03:52.524277 kernel: nvme nvme0: pci function 0000:00:04.0 Sep 13 00:03:52.534833 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 13 00:03:52.546572 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 00:03:52.546623 kernel: GPT:9289727 != 16777215 Sep 13 00:03:52.546647 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 00:03:52.548691 kernel: GPT:9289727 != 16777215 Sep 13 00:03:52.549948 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 00:03:52.551750 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 13 00:03:52.624843 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (559) Sep 13 00:03:52.682389 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 13 00:03:52.732347 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 13 00:03:52.754935 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 13 00:03:52.759433 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 13 00:03:52.774757 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:03:52.783201 systemd[1]: Starting disk-uuid.service... Sep 13 00:03:52.801832 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 13 00:03:52.801973 disk-uuid[671]: Primary Header is updated. Sep 13 00:03:52.801973 disk-uuid[671]: Secondary Entries is updated. Sep 13 00:03:52.801973 disk-uuid[671]: Secondary Header is updated. Sep 13 00:03:53.834836 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 13 00:03:53.836271 disk-uuid[672]: The operation has completed successfully. Sep 13 00:03:54.002681 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:03:54.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:03:54.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:54.002926 systemd[1]: Finished disk-uuid.service. Sep 13 00:03:54.028933 systemd[1]: Starting verity-setup.service... Sep 13 00:03:54.068260 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 13 00:03:54.179312 systemd[1]: Found device dev-mapper-usr.device. Sep 13 00:03:54.184512 systemd[1]: Mounting sysusr-usr.mount... Sep 13 00:03:54.188328 systemd[1]: Finished verity-setup.service. Sep 13 00:03:54.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:54.285092 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 13 00:03:54.285927 systemd[1]: Mounted sysusr-usr.mount. Sep 13 00:03:54.286307 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 13 00:03:54.287567 systemd[1]: Starting ignition-setup.service... Sep 13 00:03:54.289049 systemd[1]: Starting parse-ip-for-networkd.service... Sep 13 00:03:54.334878 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 13 00:03:54.334965 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 13 00:03:54.334992 kernel: BTRFS info (device nvme0n1p6): has skinny extents Sep 13 00:03:54.350845 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 13 00:03:54.369601 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 00:03:54.387984 systemd[1]: Finished ignition-setup.service. 
Sep 13 00:03:54.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:54.392934 systemd[1]: Starting ignition-fetch-offline.service... Sep 13 00:03:54.455236 systemd[1]: Finished parse-ip-for-networkd.service. Sep 13 00:03:54.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:54.458000 audit: BPF prog-id=9 op=LOAD Sep 13 00:03:54.461411 systemd[1]: Starting systemd-networkd.service... Sep 13 00:03:54.513873 systemd-networkd[1185]: lo: Link UP Sep 13 00:03:54.513895 systemd-networkd[1185]: lo: Gained carrier Sep 13 00:03:54.517702 systemd-networkd[1185]: Enumeration completed Sep 13 00:03:54.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:54.518186 systemd-networkd[1185]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:03:54.518431 systemd[1]: Started systemd-networkd.service. Sep 13 00:03:54.520435 systemd[1]: Reached target network.target. Sep 13 00:03:54.523757 systemd[1]: Starting iscsiuio.service... Sep 13 00:03:54.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:54.542741 systemd[1]: Started iscsiuio.service. Sep 13 00:03:54.553718 systemd-networkd[1185]: eth0: Link UP Sep 13 00:03:54.553727 systemd-networkd[1185]: eth0: Gained carrier Sep 13 00:03:54.555567 systemd[1]: Starting iscsid.service... 
Sep 13 00:03:54.565452 iscsid[1190]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:03:54.565452 iscsid[1190]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 13 00:03:54.565452 iscsid[1190]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 13 00:03:54.565452 iscsid[1190]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 13 00:03:54.584227 iscsid[1190]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:03:54.584227 iscsid[1190]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 13 00:03:54.595201 systemd[1]: Started iscsid.service. Sep 13 00:03:54.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:54.596221 systemd-networkd[1185]: eth0: DHCPv4 address 172.31.29.1/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 13 00:03:54.601310 systemd[1]: Starting dracut-initqueue.service... Sep 13 00:03:54.626022 systemd[1]: Finished dracut-initqueue.service. Sep 13 00:03:54.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:54.629712 systemd[1]: Reached target remote-fs-pre.target. Sep 13 00:03:54.629877 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:03:54.637142 systemd[1]: Reached target remote-fs.target.
Sep 13 00:03:54.643231 systemd[1]: Starting dracut-pre-mount.service... Sep 13 00:03:54.662593 systemd[1]: Finished dracut-pre-mount.service. Sep 13 00:03:54.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:55.103087 ignition[1135]: Ignition 2.14.0 Sep 13 00:03:55.104881 ignition[1135]: Stage: fetch-offline Sep 13 00:03:55.106849 ignition[1135]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:03:55.109531 ignition[1135]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 13 00:03:55.139124 ignition[1135]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:03:55.140308 ignition[1135]: Ignition finished successfully Sep 13 00:03:55.144742 systemd[1]: Finished ignition-fetch-offline.service. Sep 13 00:03:55.159606 kernel: kauditd_printk_skb: 17 callbacks suppressed Sep 13 00:03:55.159642 kernel: audit: type=1130 audit(1757721835.144:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:55.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:55.148269 systemd[1]: Starting ignition-fetch.service... 
Sep 13 00:03:55.169530 ignition[1209]: Ignition 2.14.0 Sep 13 00:03:55.169557 ignition[1209]: Stage: fetch Sep 13 00:03:55.169885 ignition[1209]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:03:55.169945 ignition[1209]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 13 00:03:55.185996 ignition[1209]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:03:55.188422 ignition[1209]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:03:55.196651 ignition[1209]: INFO : PUT result: OK Sep 13 00:03:55.199719 ignition[1209]: DEBUG : parsed url from cmdline: "" Sep 13 00:03:55.207135 ignition[1209]: INFO : no config URL provided Sep 13 00:03:55.207135 ignition[1209]: INFO : reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:03:55.207135 ignition[1209]: INFO : no config at "/usr/lib/ignition/user.ign" Sep 13 00:03:55.207135 ignition[1209]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:03:55.216881 ignition[1209]: INFO : PUT result: OK Sep 13 00:03:55.218618 ignition[1209]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Sep 13 00:03:55.221956 ignition[1209]: INFO : GET result: OK Sep 13 00:03:55.223596 ignition[1209]: DEBUG : parsing config with SHA512: 2f8bafe3ed8b5195f886d3417ddb8cf46be13be0f68e728ea20753c8fbe7760d0d580d8d71fccdc01884a12f0b839bc8ef654d83f0fd59ee11b8d7b06e9cfd6e Sep 13 00:03:55.236254 unknown[1209]: fetched base config from "system" Sep 13 00:03:55.236285 unknown[1209]: fetched base config from "system" Sep 13 00:03:55.236300 unknown[1209]: fetched user config from "aws" Sep 13 00:03:55.242571 ignition[1209]: fetch: fetch complete Sep 13 00:03:55.242598 ignition[1209]: fetch: fetch passed Sep 13 00:03:55.242701 ignition[1209]: Ignition finished successfully Sep 13 00:03:55.249312 systemd[1]: Finished 
ignition-fetch.service. Sep 13 00:03:55.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:55.254075 systemd[1]: Starting ignition-kargs.service... Sep 13 00:03:55.263250 kernel: audit: type=1130 audit(1757721835.251:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:55.276909 ignition[1215]: Ignition 2.14.0 Sep 13 00:03:55.276938 ignition[1215]: Stage: kargs Sep 13 00:03:55.277233 ignition[1215]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:03:55.277296 ignition[1215]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 13 00:03:55.292734 ignition[1215]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:03:55.295187 ignition[1215]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:03:55.298414 ignition[1215]: INFO : PUT result: OK Sep 13 00:03:55.303598 ignition[1215]: kargs: kargs passed Sep 13 00:03:55.303704 ignition[1215]: Ignition finished successfully Sep 13 00:03:55.308087 systemd[1]: Finished ignition-kargs.service. Sep 13 00:03:55.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:55.322869 kernel: audit: type=1130 audit(1757721835.309:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:55.320868 systemd[1]: Starting ignition-disks.service... 
Sep 13 00:03:55.336543 ignition[1221]: Ignition 2.14.0 Sep 13 00:03:55.336570 ignition[1221]: Stage: disks Sep 13 00:03:55.336899 ignition[1221]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:03:55.336953 ignition[1221]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 13 00:03:55.352013 ignition[1221]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:03:55.354664 ignition[1221]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:03:55.357743 ignition[1221]: INFO : PUT result: OK Sep 13 00:03:55.362453 ignition[1221]: disks: disks passed Sep 13 00:03:55.362547 ignition[1221]: Ignition finished successfully Sep 13 00:03:55.378907 kernel: audit: type=1130 audit(1757721835.367:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:55.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:55.364310 systemd[1]: Finished ignition-disks.service. Sep 13 00:03:55.370485 systemd[1]: Reached target initrd-root-device.target. Sep 13 00:03:55.380842 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:03:55.381330 systemd[1]: Reached target local-fs.target. Sep 13 00:03:55.381699 systemd[1]: Reached target sysinit.target. Sep 13 00:03:55.382397 systemd[1]: Reached target basic.target. Sep 13 00:03:55.384250 systemd[1]: Starting systemd-fsck-root.service... Sep 13 00:03:55.432761 systemd-fsck[1229]: ROOT: clean, 629/553520 files, 56027/553472 blocks Sep 13 00:03:55.437488 systemd[1]: Finished systemd-fsck-root.service. Sep 13 00:03:55.441369 systemd[1]: Mounting sysroot.mount... 
Sep 13 00:03:55.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:55.453828 kernel: audit: type=1130 audit(1757721835.435:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:55.470821 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 13 00:03:55.472097 systemd[1]: Mounted sysroot.mount. Sep 13 00:03:55.475411 systemd[1]: Reached target initrd-root-fs.target. Sep 13 00:03:55.481333 systemd[1]: Mounting sysroot-usr.mount... Sep 13 00:03:55.489203 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 13 00:03:55.489287 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:03:55.489343 systemd[1]: Reached target ignition-diskful.target. Sep 13 00:03:55.503143 systemd[1]: Mounted sysroot-usr.mount. Sep 13 00:03:55.523771 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 13 00:03:55.529218 systemd[1]: Starting initrd-setup-root.service... 
Sep 13 00:03:55.542546 initrd-setup-root[1251]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:03:55.560835 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1246) Sep 13 00:03:55.562332 initrd-setup-root[1259]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:03:55.572177 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 13 00:03:55.572215 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 13 00:03:55.572238 kernel: BTRFS info (device nvme0n1p6): has skinny extents Sep 13 00:03:55.585094 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 13 00:03:55.586512 initrd-setup-root[1285]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:03:55.597548 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 13 00:03:55.601202 initrd-setup-root[1293]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:03:55.650068 systemd-networkd[1185]: eth0: Gained IPv6LL Sep 13 00:03:55.799698 systemd[1]: Finished initrd-setup-root.service. Sep 13 00:03:55.810323 kernel: audit: type=1130 audit(1757721835.798:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:55.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:55.811625 systemd[1]: Starting ignition-mount.service... Sep 13 00:03:55.816715 systemd[1]: Starting sysroot-boot.service... Sep 13 00:03:55.831238 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Sep 13 00:03:55.831416 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Sep 13 00:03:55.853356 ignition[1311]: INFO : Ignition 2.14.0 Sep 13 00:03:55.853356 ignition[1311]: INFO : Stage: mount Sep 13 00:03:55.856907 ignition[1311]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:03:55.856907 ignition[1311]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 13 00:03:55.886254 systemd[1]: Finished sysroot-boot.service. Sep 13 00:03:55.887954 ignition[1311]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:03:55.887954 ignition[1311]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:03:55.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:55.896026 ignition[1311]: INFO : PUT result: OK Sep 13 00:03:55.904774 kernel: audit: type=1130 audit(1757721835.894:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:55.904047 systemd[1]: Finished ignition-mount.service. Sep 13 00:03:55.911122 ignition[1311]: INFO : mount: mount passed Sep 13 00:03:55.911122 ignition[1311]: INFO : Ignition finished successfully Sep 13 00:03:55.908056 systemd[1]: Starting ignition-files.service... Sep 13 00:03:55.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:55.926833 kernel: audit: type=1130 audit(1757721835.905:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:03:55.928436 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 13 00:03:55.950840 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by mount (1321) Sep 13 00:03:55.956804 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 13 00:03:55.956851 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 13 00:03:55.956885 kernel: BTRFS info (device nvme0n1p6): has skinny extents Sep 13 00:03:55.968827 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 13 00:03:55.974126 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 13 00:03:56.004769 ignition[1340]: INFO : Ignition 2.14.0 Sep 13 00:03:56.006812 ignition[1340]: INFO : Stage: files Sep 13 00:03:56.008617 ignition[1340]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:03:56.011334 ignition[1340]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 13 00:03:56.027090 ignition[1340]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:03:56.029784 ignition[1340]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:03:56.032987 ignition[1340]: INFO : PUT result: OK Sep 13 00:03:56.038494 ignition[1340]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:03:56.043197 ignition[1340]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:03:56.043197 ignition[1340]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:03:56.080184 ignition[1340]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:03:56.083349 ignition[1340]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:03:56.087844 unknown[1340]: wrote ssh authorized keys file for user: core Sep 13 
00:03:56.090224 ignition[1340]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:03:56.095247 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 13 00:03:56.099173 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 13 00:03:56.099173 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 13 00:03:56.099173 ignition[1340]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 13 00:03:56.178107 ignition[1340]: INFO : GET result: OK Sep 13 00:03:56.396948 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 13 00:03:56.403290 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:03:56.407034 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:03:56.410773 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 13 00:03:56.410773 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 13 00:03:56.421360 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Sep 13 00:03:56.421360 ignition[1340]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Sep 13 00:03:56.436385 ignition[1340]: INFO : op(1): 
[started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3001663827" Sep 13 00:03:56.436385 ignition[1340]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3001663827": device or resource busy Sep 13 00:03:56.436385 ignition[1340]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3001663827", trying btrfs: device or resource busy Sep 13 00:03:56.436385 ignition[1340]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3001663827" Sep 13 00:03:56.436385 ignition[1340]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3001663827" Sep 13 00:03:56.453313 ignition[1340]: INFO : op(3): [started] unmounting "/mnt/oem3001663827" Sep 13 00:03:56.453313 ignition[1340]: INFO : op(3): [finished] unmounting "/mnt/oem3001663827" Sep 13 00:03:56.453313 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Sep 13 00:03:56.453313 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:03:56.453313 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:03:56.453313 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:03:56.453313 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:03:56.453313 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:03:56.453313 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:03:56.453313 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] 
writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:03:56.453313 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:03:56.453313 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Sep 13 00:03:56.453313 ignition[1340]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Sep 13 00:03:56.502335 ignition[1340]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3836141075" Sep 13 00:03:56.502335 ignition[1340]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3836141075": device or resource busy Sep 13 00:03:56.502335 ignition[1340]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3836141075", trying btrfs: device or resource busy Sep 13 00:03:56.502335 ignition[1340]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3836141075" Sep 13 00:03:56.502335 ignition[1340]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3836141075" Sep 13 00:03:56.502335 ignition[1340]: INFO : op(6): [started] unmounting "/mnt/oem3836141075" Sep 13 00:03:56.502335 ignition[1340]: INFO : op(6): [finished] unmounting "/mnt/oem3836141075" Sep 13 00:03:56.502335 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Sep 13 00:03:56.502335 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 13 00:03:56.502335 ignition[1340]: INFO : GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Sep 13 00:03:56.944525 ignition[1340]: INFO : GET result: OK Sep 13 00:03:57.464931 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] 
writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 13 00:03:57.469937 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Sep 13 00:03:57.469937 ignition[1340]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Sep 13 00:03:57.483017 ignition[1340]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem974416094" Sep 13 00:03:57.489910 ignition[1340]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem974416094": device or resource busy Sep 13 00:03:57.489910 ignition[1340]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem974416094", trying btrfs: device or resource busy Sep 13 00:03:57.489910 ignition[1340]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem974416094" Sep 13 00:03:57.489910 ignition[1340]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem974416094" Sep 13 00:03:57.489910 ignition[1340]: INFO : op(9): [started] unmounting "/mnt/oem974416094" Sep 13 00:03:57.502879 systemd[1]: mnt-oem974416094.mount: Deactivated successfully. 
Sep 13 00:03:57.510998 ignition[1340]: INFO : op(9): [finished] unmounting "/mnt/oem974416094" Sep 13 00:03:57.513479 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Sep 13 00:03:57.513479 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Sep 13 00:03:57.513479 ignition[1340]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Sep 13 00:03:57.537628 ignition[1340]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2398531699" Sep 13 00:03:57.543239 ignition[1340]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2398531699": device or resource busy Sep 13 00:03:57.543239 ignition[1340]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2398531699", trying btrfs: device or resource busy Sep 13 00:03:57.543239 ignition[1340]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2398531699" Sep 13 00:03:57.543239 ignition[1340]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2398531699" Sep 13 00:03:57.558319 ignition[1340]: INFO : op(c): [started] unmounting "/mnt/oem2398531699" Sep 13 00:03:57.564213 ignition[1340]: INFO : op(c): [finished] unmounting "/mnt/oem2398531699" Sep 13 00:03:57.564213 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Sep 13 00:03:57.564213 ignition[1340]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service" Sep 13 00:03:57.564213 ignition[1340]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service" Sep 13 00:03:57.564213 ignition[1340]: INFO : files: op(11): [started] processing unit "amazon-ssm-agent.service" Sep 13 00:03:57.564213 ignition[1340]: INFO : files: op(11): op(12): [started] 
writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Sep 13 00:03:57.563220 systemd[1]: mnt-oem2398531699.mount: Deactivated successfully. Sep 13 00:03:57.591080 ignition[1340]: INFO : files: op(11): op(12): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Sep 13 00:03:57.591080 ignition[1340]: INFO : files: op(11): [finished] processing unit "amazon-ssm-agent.service" Sep 13 00:03:57.591080 ignition[1340]: INFO : files: op(13): [started] processing unit "nvidia.service" Sep 13 00:03:57.591080 ignition[1340]: INFO : files: op(13): [finished] processing unit "nvidia.service" Sep 13 00:03:57.591080 ignition[1340]: INFO : files: op(14): [started] processing unit "containerd.service" Sep 13 00:03:57.591080 ignition[1340]: INFO : files: op(14): op(15): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 13 00:03:57.591080 ignition[1340]: INFO : files: op(14): op(15): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 13 00:03:57.591080 ignition[1340]: INFO : files: op(14): [finished] processing unit "containerd.service" Sep 13 00:03:57.591080 ignition[1340]: INFO : files: op(16): [started] processing unit "prepare-helm.service" Sep 13 00:03:57.591080 ignition[1340]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:03:57.591080 ignition[1340]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:03:57.591080 ignition[1340]: INFO : files: op(16): [finished] processing unit "prepare-helm.service" Sep 13 00:03:57.591080 ignition[1340]: INFO : files: op(18): [started] setting preset to enabled for "amazon-ssm-agent.service" Sep 
13 00:03:57.591080 ignition[1340]: INFO : files: op(18): [finished] setting preset to enabled for "amazon-ssm-agent.service" Sep 13 00:03:57.591080 ignition[1340]: INFO : files: op(19): [started] setting preset to enabled for "nvidia.service" Sep 13 00:03:57.591080 ignition[1340]: INFO : files: op(19): [finished] setting preset to enabled for "nvidia.service" Sep 13 00:03:57.591080 ignition[1340]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service" Sep 13 00:03:57.591080 ignition[1340]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:03:57.591080 ignition[1340]: INFO : files: op(1b): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 13 00:03:57.591080 ignition[1340]: INFO : files: op(1b): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 13 00:03:57.658518 ignition[1340]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:03:57.658518 ignition[1340]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:03:57.658518 ignition[1340]: INFO : files: files passed Sep 13 00:03:57.658518 ignition[1340]: INFO : Ignition finished successfully Sep 13 00:03:57.674708 systemd[1]: Finished ignition-files.service. Sep 13 00:03:57.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:57.685728 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 13 00:03:57.696144 kernel: audit: type=1130 audit(1757721837.676:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:03:57.690726 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 13 00:03:57.700314 systemd[1]: Starting ignition-quench.service... Sep 13 00:03:57.709129 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:03:57.709649 systemd[1]: Finished ignition-quench.service. Sep 13 00:03:57.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:57.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:57.723850 kernel: audit: type=1130 audit(1757721837.713:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:57.724825 initrd-setup-root-after-ignition[1365]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:03:57.729459 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 13 00:03:57.733929 systemd[1]: Reached target ignition-complete.target. Sep 13 00:03:57.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:57.739054 systemd[1]: Starting initrd-parse-etc.service... Sep 13 00:03:57.765338 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:03:57.767745 systemd[1]: Finished initrd-parse-etc.service. 
Sep 13 00:03:57.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:57.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:57.771568 systemd[1]: Reached target initrd-fs.target. Sep 13 00:03:57.775450 systemd[1]: Reached target initrd.target. Sep 13 00:03:57.778627 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 13 00:03:57.782918 systemd[1]: Starting dracut-pre-pivot.service... Sep 13 00:03:57.804247 systemd[1]: Finished dracut-pre-pivot.service. Sep 13 00:03:57.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:57.815226 systemd[1]: Starting initrd-cleanup.service... Sep 13 00:03:57.838626 systemd[1]: Stopped target nss-lookup.target. Sep 13 00:03:57.842235 systemd[1]: Stopped target remote-cryptsetup.target. Sep 13 00:03:57.846106 systemd[1]: Stopped target timers.target. Sep 13 00:03:57.849362 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:03:57.851686 systemd[1]: Stopped dracut-pre-pivot.service. Sep 13 00:03:57.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:57.855294 systemd[1]: Stopped target initrd.target. Sep 13 00:03:57.858598 systemd[1]: Stopped target basic.target. Sep 13 00:03:57.861764 systemd[1]: Stopped target ignition-complete.target. Sep 13 00:03:57.865459 systemd[1]: Stopped target ignition-diskful.target. 
Sep 13 00:03:57.872949 systemd[1]: Stopped target initrd-root-device.target. Sep 13 00:03:57.876768 systemd[1]: Stopped target remote-fs.target. Sep 13 00:03:57.880061 systemd[1]: Stopped target remote-fs-pre.target. Sep 13 00:03:57.883688 systemd[1]: Stopped target sysinit.target. Sep 13 00:03:57.886912 systemd[1]: Stopped target local-fs.target. Sep 13 00:03:57.890132 systemd[1]: Stopped target local-fs-pre.target. Sep 13 00:03:57.893763 systemd[1]: Stopped target swap.target. Sep 13 00:03:57.896768 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:03:57.899007 systemd[1]: Stopped dracut-pre-mount.service. Sep 13 00:03:57.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:57.902614 systemd[1]: Stopped target cryptsetup.target. Sep 13 00:03:57.906088 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:03:57.908236 systemd[1]: Stopped dracut-initqueue.service. Sep 13 00:03:57.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:57.911823 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:03:57.914320 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 13 00:03:57.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:57.921168 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:03:57.923287 systemd[1]: Stopped ignition-files.service. 
Sep 13 00:03:57.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:57.928274 systemd[1]: Stopping ignition-mount.service... Sep 13 00:03:57.950319 ignition[1378]: INFO : Ignition 2.14.0 Sep 13 00:03:57.950319 ignition[1378]: INFO : Stage: umount Sep 13 00:03:57.950319 ignition[1378]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:03:57.950319 ignition[1378]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 13 00:03:57.965783 iscsid[1190]: iscsid shutting down. Sep 13 00:03:57.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:57.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:57.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:57.954594 systemd[1]: Stopping iscsid.service... Sep 13 00:03:57.995104 ignition[1378]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:03:57.995104 ignition[1378]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:03:57.995104 ignition[1378]: INFO : PUT result: OK Sep 13 00:03:57.964256 systemd[1]: Stopping sysroot-boot.service... 
Sep 13 00:03:58.003849 ignition[1378]: INFO : umount: umount passed Sep 13 00:03:58.003849 ignition[1378]: INFO : Ignition finished successfully Sep 13 00:03:57.972871 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:03:57.973183 systemd[1]: Stopped systemd-udev-trigger.service. Sep 13 00:03:57.979355 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:03:57.979576 systemd[1]: Stopped dracut-pre-trigger.service. Sep 13 00:03:57.986619 systemd[1]: iscsid.service: Deactivated successfully. Sep 13 00:03:57.986842 systemd[1]: Stopped iscsid.service. Sep 13 00:03:57.994518 systemd[1]: Stopping iscsiuio.service... Sep 13 00:03:58.023133 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 13 00:03:58.023370 systemd[1]: Stopped iscsiuio.service. Sep 13 00:03:58.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:58.033150 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:03:58.035380 systemd[1]: Finished initrd-cleanup.service. Sep 13 00:03:58.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:58.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:58.039223 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:03:58.041770 systemd[1]: Stopped ignition-mount.service. Sep 13 00:03:58.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:03:58.049641 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:03:58.053582 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:03:58.053700 systemd[1]: Stopped ignition-disks.service. Sep 13 00:03:58.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:58.062641 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:03:58.062749 systemd[1]: Stopped ignition-kargs.service. Sep 13 00:03:58.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:58.067886 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 13 00:03:58.067971 systemd[1]: Stopped ignition-fetch.service. Sep 13 00:03:58.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:58.074613 systemd[1]: Stopped target network.target. Sep 13 00:03:58.079331 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:03:58.079447 systemd[1]: Stopped ignition-fetch-offline.service. Sep 13 00:03:58.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:58.083547 systemd[1]: Stopped target paths.target. Sep 13 00:03:58.086915 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 00:03:58.093880 systemd[1]: Stopped systemd-ask-password-console.path. 
Sep 13 00:03:58.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:58.095856 systemd[1]: Stopped target slices.target. Sep 13 00:03:58.097507 systemd[1]: Stopped target sockets.target. Sep 13 00:03:58.099292 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:03:58.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:58.099369 systemd[1]: Closed iscsid.socket. Sep 13 00:03:58.101770 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:03:58.101868 systemd[1]: Closed iscsiuio.socket. Sep 13 00:03:58.103880 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:03:58.103971 systemd[1]: Stopped ignition-setup.service. Sep 13 00:03:58.106242 systemd[1]: Stopping systemd-networkd.service... Sep 13 00:03:58.109055 systemd[1]: Stopping systemd-resolved.service... Sep 13 00:03:58.111866 systemd-networkd[1185]: eth0: DHCPv6 lease lost Sep 13 00:03:58.138000 audit: BPF prog-id=9 op=UNLOAD Sep 13 00:03:58.115022 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:03:58.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:58.115242 systemd[1]: Stopped systemd-networkd.service. Sep 13 00:03:58.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:58.118506 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:03:58.118575 systemd[1]: Closed systemd-networkd.socket. 
Sep 13 00:03:58.121683 systemd[1]: Stopping network-cleanup.service... Sep 13 00:03:58.141102 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:03:58.142680 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 13 00:03:58.144835 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:03:58.144934 systemd[1]: Stopped systemd-sysctl.service. Sep 13 00:03:58.151938 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:03:58.152035 systemd[1]: Stopped systemd-modules-load.service. Sep 13 00:03:58.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:58.173928 systemd[1]: Stopping systemd-udevd.service... Sep 13 00:03:58.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:58.191098 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:03:58.191315 systemd[1]: Stopped systemd-resolved.service. Sep 13 00:03:58.205000 audit: BPF prog-id=6 op=UNLOAD Sep 13 00:03:58.207622 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:03:58.210074 systemd[1]: Stopped network-cleanup.service. Sep 13 00:03:58.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:58.218378 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:03:58.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:58.218577 systemd[1]: Stopped sysroot-boot.service. 
Sep 13 00:03:58.222878 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:03:58.223175 systemd[1]: Stopped systemd-udevd.service. Sep 13 00:03:58.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:58.233120 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:03:58.233221 systemd[1]: Closed systemd-udevd-control.socket. Sep 13 00:03:58.241799 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:03:58.242637 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 13 00:03:58.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:58.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:58.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:58.245305 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:03:58.246536 systemd[1]: Stopped dracut-pre-udev.service. Sep 13 00:03:58.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:58.248393 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:03:58.248475 systemd[1]: Stopped dracut-cmdline.service. Sep 13 00:03:58.250650 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Sep 13 00:03:58.250731 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 13 00:03:58.253805 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:03:58.253980 systemd[1]: Stopped initrd-setup-root.service. Sep 13 00:03:58.259271 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 13 00:03:58.264998 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 00:03:58.265557 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Sep 13 00:03:58.285156 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:03:58.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:58.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:58.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:58.285258 systemd[1]: Stopped kmod-static-nodes.service. Sep 13 00:03:58.287151 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:03:58.287233 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 13 00:03:58.306968 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:03:58.307470 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 13 00:03:58.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:03:58.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:03:58.312009 systemd[1]: Reached target initrd-switch-root.target. Sep 13 00:03:58.319284 systemd[1]: Starting initrd-switch-root.service... Sep 13 00:03:58.356360 systemd[1]: Switching root. Sep 13 00:03:58.358000 audit: BPF prog-id=8 op=UNLOAD Sep 13 00:03:58.358000 audit: BPF prog-id=7 op=UNLOAD Sep 13 00:03:58.362000 audit: BPF prog-id=5 op=UNLOAD Sep 13 00:03:58.363000 audit: BPF prog-id=4 op=UNLOAD Sep 13 00:03:58.363000 audit: BPF prog-id=3 op=UNLOAD Sep 13 00:03:58.384845 systemd-journald[310]: Journal stopped Sep 13 00:04:04.305563 systemd-journald[310]: Received SIGTERM from PID 1 (systemd). Sep 13 00:04:04.305669 kernel: SELinux: Class mctp_socket not defined in policy. Sep 13 00:04:04.305712 kernel: SELinux: Class anon_inode not defined in policy. Sep 13 00:04:04.305750 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 13 00:04:04.305782 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:04:04.305833 kernel: SELinux: policy capability open_perms=1 Sep 13 00:04:04.305865 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:04:04.305895 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:04:04.305926 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:04:04.305956 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:04:04.305986 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:04:04.306016 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:04:04.306050 systemd[1]: Successfully loaded SELinux policy in 119.426ms. Sep 13 00:04:04.306104 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.432ms. 
Sep 13 00:04:04.306139 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 00:04:04.306170 systemd[1]: Detected virtualization amazon. Sep 13 00:04:04.306203 systemd[1]: Detected architecture arm64. Sep 13 00:04:04.306233 systemd[1]: Detected first boot. Sep 13 00:04:04.306269 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:04:04.306302 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 13 00:04:04.306337 systemd[1]: Populated /etc with preset unit settings. Sep 13 00:04:04.306369 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:04:04.306402 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:04:04.306435 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:04:04.306471 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:04:04.306502 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device. Sep 13 00:04:04.306534 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 13 00:04:04.306565 systemd[1]: Created slice system-addon\x2drun.slice. Sep 13 00:04:04.306602 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Sep 13 00:04:04.306632 systemd[1]: Created slice system-getty.slice. Sep 13 00:04:04.306664 systemd[1]: Created slice system-modprobe.slice. 
Sep 13 00:04:04.306694 systemd[1]: Created slice system-serial\x2dgetty.slice.
Sep 13 00:04:04.306723 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Sep 13 00:04:04.306755 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Sep 13 00:04:04.306786 systemd[1]: Created slice user.slice.
Sep 13 00:04:04.306853 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 00:04:04.306885 systemd[1]: Started systemd-ask-password-wall.path.
Sep 13 00:04:04.306919 systemd[1]: Set up automount boot.automount.
Sep 13 00:04:04.306950 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Sep 13 00:04:04.306985 systemd[1]: Reached target integritysetup.target.
Sep 13 00:04:04.307025 systemd[1]: Reached target remote-cryptsetup.target.
Sep 13 00:04:04.307057 systemd[1]: Reached target remote-fs.target.
Sep 13 00:04:04.307086 systemd[1]: Reached target slices.target.
Sep 13 00:04:04.307116 systemd[1]: Reached target swap.target.
Sep 13 00:04:04.307145 systemd[1]: Reached target torcx.target.
Sep 13 00:04:04.307180 systemd[1]: Reached target veritysetup.target.
Sep 13 00:04:04.307213 systemd[1]: Listening on systemd-coredump.socket.
Sep 13 00:04:04.307246 systemd[1]: Listening on systemd-initctl.socket.
Sep 13 00:04:04.307283 kernel: kauditd_printk_skb: 57 callbacks suppressed
Sep 13 00:04:04.307315 kernel: audit: type=1400 audit(1757721843.973:88): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 00:04:04.307344 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 13 00:04:04.307374 kernel: audit: type=1335 audit(1757721843.973:89): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Sep 13 00:04:04.307408 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 13 00:04:04.307438 systemd[1]: Listening on systemd-journald.socket.
Sep 13 00:04:04.307467 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 00:04:04.307498 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 00:04:04.307529 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 00:04:04.307558 systemd[1]: Listening on systemd-userdbd.socket.
Sep 13 00:04:04.307590 systemd[1]: Mounting dev-hugepages.mount...
Sep 13 00:04:04.307620 systemd[1]: Mounting dev-mqueue.mount...
Sep 13 00:04:04.307655 systemd[1]: Mounting media.mount...
Sep 13 00:04:04.307691 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 13 00:04:04.307722 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 13 00:04:04.307751 systemd[1]: Mounting tmp.mount...
Sep 13 00:04:04.307780 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 13 00:04:04.307831 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:04:04.307863 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 00:04:04.307894 systemd[1]: Starting modprobe@configfs.service...
Sep 13 00:04:04.307924 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:04:04.307953 systemd[1]: Starting modprobe@drm.service...
Sep 13 00:04:04.307987 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:04:04.308019 systemd[1]: Starting modprobe@fuse.service...
Sep 13 00:04:04.308048 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:04:04.308080 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:04:04.308112 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Sep 13 00:04:04.308143 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Sep 13 00:04:04.308172 systemd[1]: Starting systemd-journald.service...
Sep 13 00:04:04.308201 systemd[1]: Starting systemd-modules-load.service...
Sep 13 00:04:04.308230 kernel: fuse: init (API version 7.34)
Sep 13 00:04:04.308271 systemd[1]: Starting systemd-network-generator.service...
Sep 13 00:04:04.308302 systemd[1]: Starting systemd-remount-fs.service...
Sep 13 00:04:04.308337 systemd[1]: Starting systemd-udev-trigger.service...
Sep 13 00:04:04.308370 systemd[1]: Mounted dev-hugepages.mount.
Sep 13 00:04:04.308401 systemd[1]: Mounted dev-mqueue.mount.
Sep 13 00:04:04.308430 systemd[1]: Mounted media.mount.
Sep 13 00:04:04.308460 systemd[1]: Mounted sys-kernel-debug.mount.
Sep 13 00:04:04.308489 systemd[1]: Mounted sys-kernel-tracing.mount.
Sep 13 00:04:04.308518 kernel: loop: module loaded
Sep 13 00:04:04.308550 systemd[1]: Mounted tmp.mount.
Sep 13 00:04:04.308579 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 00:04:04.308610 kernel: audit: type=1130 audit(1757721844.264:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:04.327969 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 00:04:04.328019 systemd[1]: Finished modprobe@configfs.service.
Sep 13 00:04:04.328050 kernel: audit: type=1130 audit(1757721844.283:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:04.328081 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:04:04.328112 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:04:04.328151 kernel: audit: type=1131 audit(1757721844.291:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:04.328180 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:04:04.328210 systemd[1]: Finished modprobe@drm.service.
Sep 13 00:04:04.328239 kernel: audit: type=1305 audit(1757721844.299:93): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 13 00:04:04.328268 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:04:04.328302 systemd-journald[1529]: Journal started
Sep 13 00:04:04.328410 systemd-journald[1529]: Runtime Journal (/run/log/journal/ec2376f50f0c3bbd006a59a8b5052245) is 8.0M, max 75.4M, 67.4M free.
Sep 13 00:04:03.973000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 00:04:03.973000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Sep 13 00:04:04.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:04.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:04.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:04.299000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 13 00:04:04.299000 audit[1529]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffd68b0850 a2=4000 a3=1 items=0 ppid=1 pid=1529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:04:04.346831 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:04:04.346932 kernel: audit: type=1300 audit(1757721844.299:93): arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffd68b0850 a2=4000 a3=1 items=0 ppid=1 pid=1529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:04:04.346978 systemd[1]: Started systemd-journald.service.
Sep 13 00:04:04.353274 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 00:04:04.353666 systemd[1]: Finished modprobe@fuse.service.
Sep 13 00:04:04.359642 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:04:04.360139 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:04:04.299000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Sep 13 00:04:04.365708 kernel: audit: type=1327 audit(1757721844.299:93): proctitle="/usr/lib/systemd/systemd-journald"
Sep 13 00:04:04.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:04.374053 kernel: audit: type=1130 audit(1757721844.309:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:04.374541 systemd[1]: Finished systemd-modules-load.service.
Sep 13 00:04:04.377258 systemd[1]: Finished systemd-network-generator.service.
Sep 13 00:04:04.380934 systemd[1]: Finished systemd-remount-fs.service.
Sep 13 00:04:04.383605 systemd[1]: Reached target network-pre.target.
Sep 13 00:04:04.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:04.391807 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Sep 13 00:04:04.396337 systemd[1]: Mounting sys-kernel-config.mount...
Sep 13 00:04:04.398218 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:04:04.407680 kernel: audit: type=1131 audit(1757721844.309:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:04.416590 systemd[1]: Starting systemd-hwdb-update.service...
Sep 13 00:04:04.423253 systemd[1]: Starting systemd-journal-flush.service...
Sep 13 00:04:04.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:04.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:04.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:04.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:04.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:04.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:04.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:04.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:04.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:04.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:04.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:04.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:04.427106 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:04:04.429649 systemd[1]: Starting systemd-random-seed.service...
Sep 13 00:04:04.435294 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:04:04.437950 systemd[1]: Starting systemd-sysctl.service...
Sep 13 00:04:04.444567 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Sep 13 00:04:04.447130 systemd[1]: Mounted sys-kernel-config.mount.
Sep 13 00:04:04.471086 systemd-journald[1529]: Time spent on flushing to /var/log/journal/ec2376f50f0c3bbd006a59a8b5052245 is 66.497ms for 1074 entries.
Sep 13 00:04:04.471086 systemd-journald[1529]: System Journal (/var/log/journal/ec2376f50f0c3bbd006a59a8b5052245) is 8.0M, max 195.6M, 187.6M free.
Sep 13 00:04:04.559157 systemd-journald[1529]: Received client request to flush runtime journal.
Sep 13 00:04:04.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:04.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:04.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:04.484909 systemd[1]: Finished systemd-random-seed.service.
Sep 13 00:04:04.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:04.487178 systemd[1]: Reached target first-boot-complete.target.
Sep 13 00:04:04.506129 systemd[1]: Finished systemd-sysctl.service.
Sep 13 00:04:04.524135 systemd[1]: Finished flatcar-tmpfiles.service.
Sep 13 00:04:04.528617 systemd[1]: Starting systemd-sysusers.service...
Sep 13 00:04:04.561482 systemd[1]: Finished systemd-journal-flush.service.
Sep 13 00:04:04.615885 systemd[1]: Finished systemd-udev-trigger.service.
Sep 13 00:04:04.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:04.621500 systemd[1]: Starting systemd-udev-settle.service...
Sep 13 00:04:04.638649 udevadm[1581]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 13 00:04:04.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:04.726589 systemd[1]: Finished systemd-sysusers.service.
Sep 13 00:04:04.733561 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 00:04:04.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:04.868009 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 00:04:05.347585 systemd[1]: Finished systemd-hwdb-update.service.
Sep 13 00:04:05.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:05.353429 systemd[1]: Starting systemd-udevd.service...
Sep 13 00:04:05.393844 systemd-udevd[1587]: Using default interface naming scheme 'v252'.
Sep 13 00:04:05.451939 systemd[1]: Started systemd-udevd.service.
Sep 13 00:04:05.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:05.462174 systemd[1]: Starting systemd-networkd.service...
Sep 13 00:04:05.476334 systemd[1]: Starting systemd-userdbd.service...
Sep 13 00:04:05.539996 (udev-worker)[1590]: Network interface NamePolicy= disabled on kernel command line.
Sep 13 00:04:05.550475 systemd[1]: Found device dev-ttyS0.device.
Sep 13 00:04:05.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:05.599928 systemd[1]: Started systemd-userdbd.service.
Sep 13 00:04:05.813876 systemd-networkd[1599]: lo: Link UP
Sep 13 00:04:05.813899 systemd-networkd[1599]: lo: Gained carrier
Sep 13 00:04:05.814975 systemd-networkd[1599]: Enumeration completed
Sep 13 00:04:05.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:05.815199 systemd[1]: Started systemd-networkd.service.
Sep 13 00:04:05.815200 systemd-networkd[1599]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:04:05.819855 systemd[1]: Starting systemd-networkd-wait-online.service...
Sep 13 00:04:05.827078 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Sep 13 00:04:05.824374 systemd-networkd[1599]: eth0: Link UP
Sep 13 00:04:05.824654 systemd-networkd[1599]: eth0: Gained carrier
Sep 13 00:04:05.839857 systemd-networkd[1599]: eth0: DHCPv4 address 172.31.29.1/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 13 00:04:05.963316 systemd[1]: Finished systemd-udev-settle.service.
Sep 13 00:04:05.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:05.975416 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 13 00:04:05.979892 systemd[1]: Starting lvm2-activation-early.service...
Sep 13 00:04:06.035271 lvm[1707]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:04:06.073563 systemd[1]: Finished lvm2-activation-early.service.
Sep 13 00:04:06.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:06.075843 systemd[1]: Reached target cryptsetup.target.
Sep 13 00:04:06.080040 systemd[1]: Starting lvm2-activation.service...
Sep 13 00:04:06.089781 lvm[1709]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:04:06.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:06.126661 systemd[1]: Finished lvm2-activation.service.
Sep 13 00:04:06.128756 systemd[1]: Reached target local-fs-pre.target.
Sep 13 00:04:06.131079 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 00:04:06.131141 systemd[1]: Reached target local-fs.target.
Sep 13 00:04:06.133218 systemd[1]: Reached target machines.target.
Sep 13 00:04:06.137979 systemd[1]: Starting ldconfig.service...
Sep 13 00:04:06.141151 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:04:06.141314 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:04:06.143757 systemd[1]: Starting systemd-boot-update.service...
Sep 13 00:04:06.147734 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Sep 13 00:04:06.154385 systemd[1]: Starting systemd-machine-id-commit.service...
Sep 13 00:04:06.159356 systemd[1]: Starting systemd-sysext.service...
Sep 13 00:04:06.168787 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1712 (bootctl)
Sep 13 00:04:06.171090 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Sep 13 00:04:06.194028 systemd[1]: Unmounting usr-share-oem.mount...
Sep 13 00:04:06.206536 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Sep 13 00:04:06.207119 systemd[1]: Unmounted usr-share-oem.mount.
Sep 13 00:04:06.232945 kernel: loop0: detected capacity change from 0 to 203944
Sep 13 00:04:06.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:06.254303 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Sep 13 00:04:06.352517 systemd-fsck[1725]: fsck.fat 4.2 (2021-01-31)
Sep 13 00:04:06.352517 systemd-fsck[1725]: /dev/nvme0n1p1: 236 files, 117310/258078 clusters
Sep 13 00:04:06.358842 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Sep 13 00:04:06.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:06.368940 systemd[1]: Mounting boot.mount...
Sep 13 00:04:06.377097 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 00:04:06.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:06.379983 systemd[1]: Finished systemd-machine-id-commit.service.
Sep 13 00:04:06.416564 systemd[1]: Mounted boot.mount.
Sep 13 00:04:06.443824 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 00:04:06.447424 systemd[1]: Finished systemd-boot-update.service.
Sep 13 00:04:06.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:06.463836 kernel: loop1: detected capacity change from 0 to 203944
Sep 13 00:04:06.479243 (sd-sysext)[1746]: Using extensions 'kubernetes'.
Sep 13 00:04:06.480566 (sd-sysext)[1746]: Merged extensions into '/usr'.
Sep 13 00:04:06.519875 systemd[1]: Mounting usr-share-oem.mount...
Sep 13 00:04:06.523996 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:04:06.526728 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:04:06.535533 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:04:06.545860 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:04:06.549903 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:04:06.550231 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:04:06.557000 systemd[1]: Mounted usr-share-oem.mount.
Sep 13 00:04:06.563275 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:04:06.563656 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:04:06.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:06.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:06.569196 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:04:06.569555 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:04:06.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:06.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:06.575408 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:04:06.575903 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:04:06.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:06.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:06.584490 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:04:06.584725 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:04:06.587567 systemd[1]: Finished systemd-sysext.service.
Sep 13 00:04:06.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:06.597056 systemd[1]: Starting ensure-sysext.service...
Sep 13 00:04:06.603338 systemd[1]: Starting systemd-tmpfiles-setup.service...
Sep 13 00:04:06.614015 systemd[1]: Reloading.
Sep 13 00:04:06.629768 systemd-tmpfiles[1760]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Sep 13 00:04:06.641180 systemd-tmpfiles[1760]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 00:04:06.655929 systemd-tmpfiles[1760]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 00:04:06.721169 /usr/lib/systemd/system-generators/torcx-generator[1780]: time="2025-09-13T00:04:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 00:04:06.722455 /usr/lib/systemd/system-generators/torcx-generator[1780]: time="2025-09-13T00:04:06Z" level=info msg="torcx already run"
Sep 13 00:04:06.973581 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:04:06.973622 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:04:07.021651 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:04:07.170439 systemd[1]: Finished systemd-tmpfiles-setup.service.
Sep 13 00:04:07.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:07.187018 systemd[1]: Starting audit-rules.service...
Sep 13 00:04:07.194299 systemd[1]: Starting clean-ca-certificates.service...
Sep 13 00:04:07.201247 systemd[1]: Starting systemd-journal-catalog-update.service...
Sep 13 00:04:07.209579 systemd[1]: Starting systemd-resolved.service...
Sep 13 00:04:07.218728 systemd[1]: Starting systemd-timesyncd.service...
Sep 13 00:04:07.228150 systemd[1]: Starting systemd-update-utmp.service...
Sep 13 00:04:07.243018 systemd[1]: Finished clean-ca-certificates.service.
Sep 13 00:04:07.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:07.256861 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:04:07.259679 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:04:07.268733 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:04:07.275517 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:04:07.279721 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:04:07.280191 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:04:07.280451 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:04:07.285358 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:04:07.285772 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:04:07.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:07.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:07.292361 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:04:07.292766 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:04:07.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:07.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:07.297253 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:04:07.301534 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:04:07.305436 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:04:07.316646 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:04:07.321051 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:04:07.321451 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:04:07.321749 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:04:07.330000 audit[1853]: SYSTEM_BOOT pid=1853 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:07.349488 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:04:07.349895 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:04:07.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:07.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:07.355658 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:04:07.356066 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:04:07.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:07.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:07.365054 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:04:07.369330 systemd[1]: Starting modprobe@drm.service...
Sep 13 00:04:07.373375 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:04:07.373883 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:04:07.374269 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:04:07.374610 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:04:07.382259 systemd[1]: Finished ensure-sysext.service.
Sep 13 00:04:07.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:07.387211 systemd[1]: Finished systemd-journal-catalog-update.service.
Sep 13 00:04:07.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:07.396180 systemd[1]: Finished systemd-update-utmp.service.
Sep 13 00:04:07.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:07.403451 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:04:07.404262 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:04:07.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:07.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:07.410607 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:04:07.413215 systemd[1]: Finished modprobe@drm.service.
Sep 13 00:04:07.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:07.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:07.419696 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:04:07.501000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 13 00:04:07.501000 audit[1881]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffdf93470 a2=420 a3=0 items=0 ppid=1844 pid=1881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:04:07.501000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Sep 13 00:04:07.504517 augenrules[1881]: No rules
Sep 13 00:04:07.506310 systemd[1]: Finished audit-rules.service.
Sep 13 00:04:07.529199 ldconfig[1711]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 00:04:07.545700 systemd[1]: Finished ldconfig.service.
Sep 13 00:04:07.552360 systemd[1]: Starting systemd-update-done.service...
Sep 13 00:04:07.555957 systemd-networkd[1599]: eth0: Gained IPv6LL
Sep 13 00:04:07.558945 systemd-resolved[1848]: Positive Trust Anchors:
Sep 13 00:04:07.558973 systemd-resolved[1848]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:04:07.559026 systemd-resolved[1848]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 00:04:07.563727 systemd[1]: Finished systemd-networkd-wait-online.service.
Sep 13 00:04:07.577604 systemd[1]: Finished systemd-update-done.service.
Sep 13 00:04:07.608316 systemd[1]: Started systemd-timesyncd.service.
Sep 13 00:04:07.612003 systemd[1]: Reached target time-set.target.
Sep 13 00:04:07.624412 systemd-resolved[1848]: Defaulting to hostname 'linux'.
Sep 13 00:04:07.627679 systemd[1]: Started systemd-resolved.service.
Sep 13 00:04:07.629676 systemd[1]: Reached target network.target.
Sep 13 00:04:07.631516 systemd[1]: Reached target network-online.target.
Sep 13 00:04:07.633520 systemd[1]: Reached target nss-lookup.target.
Sep 13 00:04:07.635435 systemd[1]: Reached target sysinit.target.
Sep 13 00:04:07.637452 systemd[1]: Started motdgen.path.
Sep 13 00:04:07.639117 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Sep 13 00:04:07.641857 systemd[1]: Started logrotate.timer.
Sep 13 00:04:07.643641 systemd[1]: Started mdadm.timer.
Sep 13 00:04:07.645209 systemd[1]: Started systemd-tmpfiles-clean.timer.
Sep 13 00:04:07.647161 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 00:04:07.647226 systemd[1]: Reached target paths.target.
Sep 13 00:04:07.648968 systemd[1]: Reached target timers.target.
Sep 13 00:04:07.651218 systemd[1]: Listening on dbus.socket.
Sep 13 00:04:07.655373 systemd[1]: Starting docker.socket...
Sep 13 00:04:07.665667 systemd[1]: Listening on sshd.socket.
Sep 13 00:04:07.667729 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:04:07.668439 systemd[1]: Listening on docker.socket.
Sep 13 00:04:07.670267 systemd[1]: Reached target sockets.target.
Sep 13 00:04:07.672114 systemd[1]: Reached target basic.target.
Sep 13 00:04:07.674131 systemd[1]: System is tainted: cgroupsv1
Sep 13 00:04:07.674219 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 00:04:07.674268 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 00:04:07.676607 systemd[1]: Started amazon-ssm-agent.service.
Sep 13 00:04:07.681855 systemd[1]: Starting containerd.service...
Sep 13 00:04:07.688263 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Sep 13 00:04:07.711142 systemd[1]: Starting dbus.service...
Sep 13 00:04:07.717046 systemd[1]: Starting enable-oem-cloudinit.service...
Sep 13 00:04:07.724889 systemd[1]: Starting extend-filesystems.service...
Sep 13 00:04:07.727402 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Sep 13 00:04:07.735274 systemd[1]: Starting kubelet.service...
Sep 13 00:04:07.744235 systemd[1]: Starting motdgen.service...
Sep 13 00:04:07.750564 systemd[1]: Started nvidia.service.
Sep 13 00:04:07.759299 systemd[1]: Starting prepare-helm.service...
Sep 13 00:04:07.347815 jq[1900]: false
Sep 13 00:04:07.515113 systemd-journald[1529]: Time jumped backwards, rotating.
Sep 13 00:04:07.770010 systemd[1]: Starting ssh-key-proc-cmdline.service...
Sep 13 00:04:07.775967 systemd[1]: Starting sshd-keygen.service...
Sep 13 00:04:07.310759 systemd[1]: Starting systemd-logind.service...
Sep 13 00:04:07.310976 systemd-timesyncd[1850]: Contacted time server 45.84.199.136:123 (0.flatcar.pool.ntp.org).
Sep 13 00:04:07.311097 systemd-timesyncd[1850]: Initial clock synchronization to Sat 2025-09-13 00:04:07.306129 UTC.
Sep 13 00:04:07.315579 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:04:07.315755 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 00:04:07.325792 systemd[1]: Starting update-engine.service...
Sep 13 00:04:07.518402 jq[1914]: true
Sep 13 00:04:07.349398 systemd-resolved[1848]: Clock change detected. Flushing caches.
Sep 13 00:04:07.352740 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Sep 13 00:04:07.519162 tar[1917]: linux-arm64/helm
Sep 13 00:04:07.359532 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 00:04:07.360466 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Sep 13 00:04:07.559752 extend-filesystems[1901]: Found loop1
Sep 13 00:04:07.559752 extend-filesystems[1901]: Found nvme0n1
Sep 13 00:04:07.559752 extend-filesystems[1901]: Found nvme0n1p1
Sep 13 00:04:07.559752 extend-filesystems[1901]: Found nvme0n1p2
Sep 13 00:04:07.559752 extend-filesystems[1901]: Found nvme0n1p3
Sep 13 00:04:07.559752 extend-filesystems[1901]: Found usr
Sep 13 00:04:07.559752 extend-filesystems[1901]: Found nvme0n1p4
Sep 13 00:04:07.559752 extend-filesystems[1901]: Found nvme0n1p6
Sep 13 00:04:07.559752 extend-filesystems[1901]: Found nvme0n1p7
Sep 13 00:04:07.559752 extend-filesystems[1901]: Found nvme0n1p9
Sep 13 00:04:07.559752 extend-filesystems[1901]: Checking size of /dev/nvme0n1p9
Sep 13 00:04:07.459055 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 00:04:07.571404 dbus-daemon[1898]: [system] SELinux support is enabled
Sep 13 00:04:07.633727 extend-filesystems[1901]: Resized partition /dev/nvme0n1p9
Sep 13 00:04:07.677242 jq[1923]: true
Sep 13 00:04:07.460048 systemd[1]: Finished ssh-key-proc-cmdline.service.
Sep 13 00:04:07.624855 dbus-daemon[1898]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1599 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 13 00:04:07.687008 extend-filesystems[1967]: resize2fs 1.46.5 (30-Dec-2021)
Sep 13 00:04:07.571749 systemd[1]: Started dbus.service.
Sep 13 00:04:07.673284 dbus-daemon[1898]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 13 00:04:07.578207 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 00:04:07.578253 systemd[1]: Reached target system-config.target.
Sep 13 00:04:07.582004 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 00:04:07.582045 systemd[1]: Reached target user-config.target.
Sep 13 00:04:07.633385 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 00:04:07.637869 systemd[1]: Finished motdgen.service.
Sep 13 00:04:07.689884 systemd[1]: Starting systemd-hostnamed.service...
Sep 13 00:04:07.723800 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Sep 13 00:04:07.825489 bash[1968]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:04:07.827844 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Sep 13 00:04:07.830791 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Sep 13 00:04:07.848269 update_engine[1912]: I0913 00:04:07.847754 1912 main.cc:92] Flatcar Update Engine starting
Sep 13 00:04:07.856568 extend-filesystems[1967]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Sep 13 00:04:07.856568 extend-filesystems[1967]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 13 00:04:07.856568 extend-filesystems[1967]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Sep 13 00:04:07.879282 extend-filesystems[1901]: Resized filesystem in /dev/nvme0n1p9
Sep 13 00:04:07.870614 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 00:04:07.882527 amazon-ssm-agent[1894]: 2025/09/13 00:04:07 Failed to load instance info from vault. RegistrationKey does not exist.
Sep 13 00:04:07.882992 update_engine[1912]: I0913 00:04:07.879822 1912 update_check_scheduler.cc:74] Next update check in 5m24s
Sep 13 00:04:07.871170 systemd[1]: Finished extend-filesystems.service.
Sep 13 00:04:07.884039 systemd[1]: Started update-engine.service.
Sep 13 00:04:07.893257 systemd[1]: Started locksmithd.service.
Sep 13 00:04:07.914919 amazon-ssm-agent[1894]: Initializing new seelog logger
Sep 13 00:04:07.915138 amazon-ssm-agent[1894]: New Seelog Logger Creation Complete
Sep 13 00:04:07.915246 amazon-ssm-agent[1894]: 2025/09/13 00:04:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 13 00:04:07.915246 amazon-ssm-agent[1894]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 13 00:04:07.915681 amazon-ssm-agent[1894]: 2025/09/13 00:04:07 processing appconfig overrides
Sep 13 00:04:08.016288 systemd[1]: nvidia.service: Deactivated successfully.
Sep 13 00:04:08.096234 systemd-logind[1911]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 13 00:04:08.096289 systemd-logind[1911]: Watching system buttons on /dev/input/event1 (Sleep Button)
Sep 13 00:04:08.096604 systemd-logind[1911]: New seat seat0.
Sep 13 00:04:08.102701 systemd[1]: Started systemd-logind.service.
Sep 13 00:04:08.224112 env[1920]: time="2025-09-13T00:04:08.223949687Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Sep 13 00:04:08.348156 env[1920]: time="2025-09-13T00:04:08.348069204Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 13 00:04:08.348417 env[1920]: time="2025-09-13T00:04:08.348360768Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:04:08.357354 coreos-metadata[1897]: Sep 13 00:04:08.357 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 13 00:04:08.361001 coreos-metadata[1897]: Sep 13 00:04:08.360 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1
Sep 13 00:04:08.363052 coreos-metadata[1897]: Sep 13 00:04:08.362 INFO Fetch successful
Sep 13 00:04:08.363052 coreos-metadata[1897]: Sep 13 00:04:08.363 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1
Sep 13 00:04:08.365023 coreos-metadata[1897]: Sep 13 00:04:08.364 INFO Fetch successful
Sep 13 00:04:08.368937 env[1920]: time="2025-09-13T00:04:08.368163360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:04:08.368937 env[1920]: time="2025-09-13T00:04:08.368234304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:04:08.370761 env[1920]: time="2025-09-13T00:04:08.368730060Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:04:08.370761 env[1920]: time="2025-09-13T00:04:08.369136428Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 13 00:04:08.370761 env[1920]: time="2025-09-13T00:04:08.369183540Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 13 00:04:08.370761 env[1920]: time="2025-09-13T00:04:08.369209580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 13 00:04:08.371400 env[1920]: time="2025-09-13T00:04:08.371351448Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:04:08.371508 unknown[1897]: wrote ssh authorized keys file for user: core
Sep 13 00:04:08.373668 env[1920]: time="2025-09-13T00:04:08.373558212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:04:08.382231 env[1920]: time="2025-09-13T00:04:08.382168296Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:04:08.396306 env[1920]: time="2025-09-13T00:04:08.396246852Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 13 00:04:08.401843 update-ssh-keys[2047]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:04:08.398050 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Sep 13 00:04:08.402649 env[1920]: time="2025-09-13T00:04:08.402587472Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 13 00:04:08.419043 env[1920]: time="2025-09-13T00:04:08.418738332Z" level=info msg="metadata content store policy set" policy=shared
Sep 13 00:04:08.432965 env[1920]: time="2025-09-13T00:04:08.432861384Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 13 00:04:08.433198 env[1920]: time="2025-09-13T00:04:08.433151532Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 13 00:04:08.433333 env[1920]: time="2025-09-13T00:04:08.433302804Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 13 00:04:08.433559 env[1920]: time="2025-09-13T00:04:08.433524684Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 13 00:04:08.440837 env[1920]: time="2025-09-13T00:04:08.435884952Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 13 00:04:08.440837 env[1920]: time="2025-09-13T00:04:08.435956592Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 13 00:04:08.440837 env[1920]: time="2025-09-13T00:04:08.435994680Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 13 00:04:08.440837 env[1920]: time="2025-09-13T00:04:08.436495920Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 13 00:04:08.440837 env[1920]: time="2025-09-13T00:04:08.436543140Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Sep 13 00:04:08.440837 env[1920]: time="2025-09-13T00:04:08.436575348Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 13 00:04:08.440837 env[1920]: time="2025-09-13T00:04:08.436607616Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 13 00:04:08.440837 env[1920]: time="2025-09-13T00:04:08.436636476Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 13 00:04:08.440837 env[1920]: time="2025-09-13T00:04:08.436886412Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 13 00:04:08.440837 env[1920]: time="2025-09-13T00:04:08.437058348Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 13 00:04:08.440837 env[1920]: time="2025-09-13T00:04:08.437606016Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 13 00:04:08.440837 env[1920]: time="2025-09-13T00:04:08.437652648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 13 00:04:08.440837 env[1920]: time="2025-09-13T00:04:08.437685744Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 13 00:04:08.440837 env[1920]: time="2025-09-13T00:04:08.437925432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 13 00:04:08.441542 env[1920]: time="2025-09-13T00:04:08.437967324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 13 00:04:08.441542 env[1920]: time="2025-09-13T00:04:08.438004320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 13 00:04:08.441542 env[1920]: time="2025-09-13T00:04:08.438032724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 13 00:04:08.441542 env[1920]: time="2025-09-13T00:04:08.438064164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 13 00:04:08.441542 env[1920]: time="2025-09-13T00:04:08.438096732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 13 00:04:08.441542 env[1920]: time="2025-09-13T00:04:08.438125400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 13 00:04:08.441542 env[1920]: time="2025-09-13T00:04:08.438153744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 13 00:04:08.441542 env[1920]: time="2025-09-13T00:04:08.438186288Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 13 00:04:08.441542 env[1920]: time="2025-09-13T00:04:08.438611772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 13 00:04:08.441542 env[1920]: time="2025-09-13T00:04:08.438645888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 13 00:04:08.441542 env[1920]: time="2025-09-13T00:04:08.438675288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 13 00:04:08.441542 env[1920]: time="2025-09-13T00:04:08.438704208Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 13 00:04:08.441542 env[1920]: time="2025-09-13T00:04:08.438736200Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Sep 13 00:04:08.441542 env[1920]: time="2025-09-13T00:04:08.438763068Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 13 00:04:08.448236 env[1920]: time="2025-09-13T00:04:08.438823560Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Sep 13 00:04:08.448236 env[1920]: time="2025-09-13T00:04:08.438886908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 13 00:04:08.441999 systemd[1]: Started containerd.service.
Sep 13 00:04:08.448441 env[1920]: time="2025-09-13T00:04:08.439203360Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 13 00:04:08.448441 env[1920]: time="2025-09-13T00:04:08.439650192Z" level=info msg="Connect containerd service"
Sep 13 00:04:08.448441 env[1920]: time="2025-09-13T00:04:08.439720440Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 13 00:04:08.448441 env[1920]: time="2025-09-13T00:04:08.441050616Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 00:04:08.448441 env[1920]: time="2025-09-13T00:04:08.441617784Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 13 00:04:08.448441 env[1920]: time="2025-09-13T00:04:08.441711348Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 13 00:04:08.448441 env[1920]: time="2025-09-13T00:04:08.441848364Z" level=info msg="containerd successfully booted in 0.334420s"
Sep 13 00:04:08.449708 dbus-daemon[1898]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 13 00:04:08.450118 systemd[1]: Started systemd-hostnamed.service.
Sep 13 00:04:08.456335 dbus-daemon[1898]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1970 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Sep 13 00:04:08.463034 env[1920]: time="2025-09-13T00:04:08.459463932Z" level=info msg="Start subscribing containerd event"
Sep 13 00:04:08.463034 env[1920]: time="2025-09-13T00:04:08.459799260Z" level=info msg="Start recovering state"
Sep 13 00:04:08.463034 env[1920]: time="2025-09-13T00:04:08.459951360Z" level=info msg="Start event monitor"
Sep 13 00:04:08.463034 env[1920]: time="2025-09-13T00:04:08.459993972Z" level=info msg="Start snapshots syncer"
Sep 13 00:04:08.463034 env[1920]: time="2025-09-13T00:04:08.460026780Z" level=info msg="Start cni network conf syncer for default"
Sep 13 00:04:08.463034 env[1920]: time="2025-09-13T00:04:08.460049076Z" level=info msg="Start streaming server"
Sep 13 00:04:08.461611 systemd[1]: Starting polkit.service...
Sep 13 00:04:08.491709 polkitd[2074]: Started polkitd version 121
Sep 13 00:04:08.537395 polkitd[2074]: Loading rules from directory /etc/polkit-1/rules.d
Sep 13 00:04:08.537526 polkitd[2074]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 13 00:04:08.545157 polkitd[2074]: Finished loading, compiling and executing 2 rules
Sep 13 00:04:08.550633 dbus-daemon[1898]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Sep 13 00:04:08.551135 systemd[1]: Started polkit.service.
Sep 13 00:04:08.556673 polkitd[2074]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 13 00:04:08.599207 systemd-hostnamed[1970]: Hostname set to (transient)
Sep 13 00:04:08.599381 systemd-resolved[1848]: System hostname changed to 'ip-172-31-29-1'.
Sep 13 00:04:08.875717 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO Create new startup processor
Sep 13 00:04:08.895621 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [LongRunningPluginsManager] registered plugins: {}
Sep 13 00:04:08.895621 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO Initializing bookkeeping folders
Sep 13 00:04:08.896257 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO removing the completed state files
Sep 13 00:04:08.896257 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO Initializing bookkeeping folders for long running plugins
Sep 13 00:04:08.896257 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO Initializing replies folder for MDS reply requests that couldn't reach the service
Sep 13 00:04:08.896257 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO Initializing healthcheck folders for long running plugins
Sep 13 00:04:08.896257 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO Initializing locations for inventory plugin
Sep 13 00:04:08.896257 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO Initializing default location for custom inventory
Sep 13 00:04:08.896257 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO Initializing default location for file inventory
Sep 13 00:04:08.896257 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO Initializing default location for role inventory
Sep 13 00:04:08.896257 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO Init the cloudwatchlogs publisher
Sep 13 00:04:08.896257 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [instanceID=i-09dc17fba2b1572f4] Successfully loaded platform independent plugin aws:softwareInventory
Sep 13 00:04:08.896257 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [instanceID=i-09dc17fba2b1572f4] Successfully loaded platform independent plugin aws:configurePackage
Sep 13 00:04:08.896257 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [instanceID=i-09dc17fba2b1572f4] Successfully loaded platform independent plugin aws:runPowerShellScript
Sep 13 00:04:08.896257 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [instanceID=i-09dc17fba2b1572f4] Successfully loaded platform independent plugin aws:updateSsmAgent
Sep 13 00:04:08.896257 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [instanceID=i-09dc17fba2b1572f4] Successfully loaded platform independent plugin aws:configureDocker
Sep 13 00:04:08.896257 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [instanceID=i-09dc17fba2b1572f4] Successfully loaded platform independent plugin aws:runDockerAction
Sep 13 00:04:08.896257 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [instanceID=i-09dc17fba2b1572f4] Successfully loaded platform independent plugin aws:refreshAssociation
Sep 13 00:04:08.896257 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [instanceID=i-09dc17fba2b1572f4] Successfully loaded platform independent plugin aws:downloadContent
Sep 13 00:04:08.896257 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [instanceID=i-09dc17fba2b1572f4] Successfully loaded platform independent plugin aws:runDocument
Sep 13 00:04:08.896257 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [instanceID=i-09dc17fba2b1572f4] Successfully loaded platform dependent plugin aws:runShellScript
Sep 13 00:04:08.896257 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0
Sep 13 00:04:08.897268 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO OS: linux, Arch: arm64
Sep 13 00:04:08.899814 amazon-ssm-agent[1894]: datastore file /var/lib/amazon/ssm/i-09dc17fba2b1572f4/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute
Sep 13 00:04:08.977350 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [MessagingDeliveryService] Starting document processing engine...
Sep 13 00:04:09.071128 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [MessagingDeliveryService] [EngineProcessor] Starting
Sep 13 00:04:09.166602 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing
Sep 13 00:04:09.260586 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [MessagingDeliveryService] Starting message polling
Sep 13 00:04:09.354693 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [MessagingDeliveryService] Starting send replies to MDS
Sep 13 00:04:09.409280 tar[1917]: linux-arm64/LICENSE
Sep 13 00:04:09.411482 tar[1917]: linux-arm64/README.md
Sep 13 00:04:09.424554 systemd[1]: Finished prepare-helm.service.
Sep 13 00:04:09.449583 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [instanceID=i-09dc17fba2b1572f4] Starting association polling
Sep 13 00:04:09.544746 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting
Sep 13 00:04:09.584834 locksmithd[1989]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 00:04:09.640066 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [MessagingDeliveryService] [Association] Launching response handler
Sep 13 00:04:09.735662 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing
Sep 13 00:04:09.831321 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service
Sep 13 00:04:09.927222 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized
Sep 13 00:04:10.023370 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [MessageGatewayService] Starting session document processing engine...
Sep 13 00:04:10.119613 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [MessageGatewayService] [EngineProcessor] Starting
Sep 13 00:04:10.216090 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module.
Sep 13 00:04:10.312825 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-09dc17fba2b1572f4, requestId: 8647ef4a-9049-4569-b706-e3c9d543d70e
Sep 13 00:04:10.409631 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [OfflineService] Starting document processing engine...
Sep 13 00:04:10.506736 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [OfflineService] [EngineProcessor] Starting
Sep 13 00:04:10.604091 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [OfflineService] [EngineProcessor] Initial processing
Sep 13 00:04:10.701517 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [OfflineService] Starting message polling
Sep 13 00:04:10.799179 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [OfflineService] Starting send replies to MDS
Sep 13 00:04:10.897086 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [LongRunningPluginsManager] starting long running plugin manager
Sep 13 00:04:10.995088 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute
Sep 13 00:04:11.064514 systemd[1]: Started kubelet.service.
Sep 13 00:04:11.093806 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [HealthCheck] HealthCheck reporting agent health.
Sep 13 00:04:11.191956 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [MessageGatewayService] listening reply.
Sep 13 00:04:11.290586 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck
Sep 13 00:04:11.389475 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [StartupProcessor] Executing startup processor tasks
Sep 13 00:04:11.488559 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running
Sep 13 00:04:11.587738 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk
Sep 13 00:04:11.687147 amazon-ssm-agent[1894]: 2025-09-13 00:04:08 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.8
Sep 13 00:04:11.786867 amazon-ssm-agent[1894]: 2025-09-13 00:04:09 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-09dc17fba2b1572f4?role=subscribe&stream=input
Sep 13 00:04:11.886612 amazon-ssm-agent[1894]: 2025-09-13 00:04:09 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-09dc17fba2b1572f4?role=subscribe&stream=input
Sep 13 00:04:11.986662 amazon-ssm-agent[1894]: 2025-09-13 00:04:09 INFO [MessageGatewayService] Starting receiving message from control channel
Sep 13 00:04:12.086944 amazon-ssm-agent[1894]: 2025-09-13 00:04:09 INFO [MessageGatewayService] [EngineProcessor] Initial processing
Sep 13 00:04:12.506725 kubelet[2125]: E0913 00:04:12.506607 2125 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:04:12.510253 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:04:12.510646 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:04:13.103985 sshd_keygen[1948]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 00:04:13.142175 systemd[1]: Finished sshd-keygen.service.
Sep 13 00:04:13.149130 systemd[1]: Starting issuegen.service...
Sep 13 00:04:13.161619 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 00:04:13.162212 systemd[1]: Finished issuegen.service.
Sep 13 00:04:13.167367 systemd[1]: Starting systemd-user-sessions.service...
Sep 13 00:04:13.184582 systemd[1]: Finished systemd-user-sessions.service.
Sep 13 00:04:13.189448 systemd[1]: Started getty@tty1.service.
Sep 13 00:04:13.197061 systemd[1]: Started serial-getty@ttyS0.service.
Sep 13 00:04:13.199392 systemd[1]: Reached target getty.target.
Sep 13 00:04:13.201309 systemd[1]: Reached target multi-user.target.
Sep 13 00:04:13.206178 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Sep 13 00:04:13.221094 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Sep 13 00:04:13.221914 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Sep 13 00:04:13.224943 systemd[1]: Startup finished in 9.964s (kernel) + 14.322s (userspace) = 24.287s.
Sep 13 00:04:15.843198 systemd[1]: Created slice system-sshd.slice.
Sep 13 00:04:15.845569 systemd[1]: Started sshd@0-172.31.29.1:22-139.178.89.65:36580.service.
Sep 13 00:04:16.135610 sshd[2151]: Accepted publickey for core from 139.178.89.65 port 36580 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM
Sep 13 00:04:16.140712 sshd[2151]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:04:16.160990 systemd[1]: Created slice user-500.slice.
Sep 13 00:04:16.163393 systemd[1]: Starting user-runtime-dir@500.service...
Sep 13 00:04:16.172906 systemd-logind[1911]: New session 1 of user core.
Sep 13 00:04:16.188253 systemd[1]: Finished user-runtime-dir@500.service.
Sep 13 00:04:16.192733 systemd[1]: Starting user@500.service...
Sep 13 00:04:16.202871 (systemd)[2156]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:04:16.394061 systemd[2156]: Queued start job for default target default.target.
Sep 13 00:04:16.395851 systemd[2156]: Reached target paths.target.
Sep 13 00:04:16.396076 systemd[2156]: Reached target sockets.target.
Sep 13 00:04:16.396213 systemd[2156]: Reached target timers.target.
Sep 13 00:04:16.396345 systemd[2156]: Reached target basic.target.
Sep 13 00:04:16.396558 systemd[2156]: Reached target default.target.
Sep 13 00:04:16.396687 systemd[1]: Started user@500.service.
Sep 13 00:04:16.398346 systemd[2156]: Startup finished in 183ms.
Sep 13 00:04:16.398742 systemd[1]: Started session-1.scope.
Sep 13 00:04:16.543407 systemd[1]: Started sshd@1-172.31.29.1:22-139.178.89.65:36584.service.
Sep 13 00:04:16.715080 sshd[2165]: Accepted publickey for core from 139.178.89.65 port 36584 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM
Sep 13 00:04:16.717453 sshd[2165]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:04:16.726704 systemd[1]: Started session-2.scope.
Sep 13 00:04:16.727150 systemd-logind[1911]: New session 2 of user core.
Sep 13 00:04:16.858606 sshd[2165]: pam_unix(sshd:session): session closed for user core
Sep 13 00:04:16.863571 systemd[1]: sshd@1-172.31.29.1:22-139.178.89.65:36584.service: Deactivated successfully.
Sep 13 00:04:16.865000 systemd[1]: session-2.scope: Deactivated successfully.
Sep 13 00:04:16.868108 systemd-logind[1911]: Session 2 logged out. Waiting for processes to exit.
Sep 13 00:04:16.870961 systemd-logind[1911]: Removed session 2.
Sep 13 00:04:16.884699 systemd[1]: Started sshd@2-172.31.29.1:22-139.178.89.65:36588.service.
Sep 13 00:04:17.056907 sshd[2172]: Accepted publickey for core from 139.178.89.65 port 36588 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM
Sep 13 00:04:17.059200 sshd[2172]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:04:17.067926 systemd[1]: Started session-3.scope.
Sep 13 00:04:17.068323 systemd-logind[1911]: New session 3 of user core.
Sep 13 00:04:17.192148 sshd[2172]: pam_unix(sshd:session): session closed for user core
Sep 13 00:04:17.198850 systemd[1]: sshd@2-172.31.29.1:22-139.178.89.65:36588.service: Deactivated successfully.
Sep 13 00:04:17.201105 systemd-logind[1911]: Session 3 logged out. Waiting for processes to exit.
Sep 13 00:04:17.201413 systemd[1]: session-3.scope: Deactivated successfully.
Sep 13 00:04:17.203621 systemd-logind[1911]: Removed session 3.
Sep 13 00:04:17.217012 systemd[1]: Started sshd@3-172.31.29.1:22-139.178.89.65:36604.service.
Sep 13 00:04:17.387241 sshd[2179]: Accepted publickey for core from 139.178.89.65 port 36604 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM
Sep 13 00:04:17.389497 sshd[2179]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:04:17.398025 systemd-logind[1911]: New session 4 of user core.
Sep 13 00:04:17.398333 systemd[1]: Started session-4.scope.
Sep 13 00:04:17.530334 sshd[2179]: pam_unix(sshd:session): session closed for user core
Sep 13 00:04:17.535401 systemd[1]: sshd@3-172.31.29.1:22-139.178.89.65:36604.service: Deactivated successfully.
Sep 13 00:04:17.538040 systemd[1]: session-4.scope: Deactivated successfully.
Sep 13 00:04:17.539106 systemd-logind[1911]: Session 4 logged out. Waiting for processes to exit.
Sep 13 00:04:17.542286 systemd-logind[1911]: Removed session 4.
Sep 13 00:04:17.556144 systemd[1]: Started sshd@4-172.31.29.1:22-139.178.89.65:36606.service.
Sep 13 00:04:17.724730 sshd[2186]: Accepted publickey for core from 139.178.89.65 port 36606 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM
Sep 13 00:04:17.727724 sshd[2186]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:04:17.736805 systemd[1]: Started session-5.scope.
Sep 13 00:04:17.737880 systemd-logind[1911]: New session 5 of user core.
Sep 13 00:04:17.893436 sudo[2190]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 13 00:04:17.894714 sudo[2190]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 13 00:04:17.918025 dbus-daemon[1898]: avc: received setenforce notice (enforcing=1)
Sep 13 00:04:17.920098 sudo[2190]: pam_unix(sudo:session): session closed for user root
Sep 13 00:04:17.945242 sshd[2186]: pam_unix(sshd:session): session closed for user core
Sep 13 00:04:17.951977 systemd[1]: sshd@4-172.31.29.1:22-139.178.89.65:36606.service: Deactivated successfully.
Sep 13 00:04:17.953496 systemd[1]: session-5.scope: Deactivated successfully.
Sep 13 00:04:17.955057 systemd-logind[1911]: Session 5 logged out. Waiting for processes to exit.
Sep 13 00:04:17.958797 systemd-logind[1911]: Removed session 5.
Sep 13 00:04:17.971398 systemd[1]: Started sshd@5-172.31.29.1:22-139.178.89.65:36608.service.
Sep 13 00:04:18.142325 sshd[2194]: Accepted publickey for core from 139.178.89.65 port 36608 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM
Sep 13 00:04:18.145520 sshd[2194]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:04:18.154986 systemd[1]: Started session-6.scope.
Sep 13 00:04:18.155439 systemd-logind[1911]: New session 6 of user core.
Sep 13 00:04:18.266488 sudo[2199]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 13 00:04:18.267576 sudo[2199]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 13 00:04:18.273256 sudo[2199]: pam_unix(sudo:session): session closed for user root
Sep 13 00:04:18.282902 sudo[2198]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 13 00:04:18.283989 sudo[2198]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 13 00:04:18.302871 systemd[1]: Stopping audit-rules.service...
Sep 13 00:04:18.304000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Sep 13 00:04:18.306011 auditctl[2202]: No rules
Sep 13 00:04:18.307313 kernel: kauditd_printk_skb: 58 callbacks suppressed
Sep 13 00:04:18.307410 kernel: audit: type=1305 audit(1757721858.304:152): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Sep 13 00:04:18.313118 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 13 00:04:18.304000 audit[2202]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffcfa97b0 a2=420 a3=0 items=0 ppid=1 pid=2202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:04:18.313971 systemd[1]: Stopped audit-rules.service.
Sep 13 00:04:18.319944 systemd[1]: Starting audit-rules.service...
Sep 13 00:04:18.304000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
Sep 13 00:04:18.329816 kernel: audit: type=1300 audit(1757721858.304:152): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffcfa97b0 a2=420 a3=0 items=0 ppid=1 pid=2202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:04:18.329941 kernel: audit: type=1327 audit(1757721858.304:152): proctitle=2F7362696E2F617564697463746C002D44
Sep 13 00:04:18.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:18.337996 kernel: audit: type=1131 audit(1757721858.313:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:18.359530 augenrules[2220]: No rules
Sep 13 00:04:18.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:18.361242 systemd[1]: Finished audit-rules.service.
Sep 13 00:04:18.369000 audit[2198]: USER_END pid=2198 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:18.370552 sudo[2198]: pam_unix(sudo:session): session closed for user root
Sep 13 00:04:18.379620 kernel: audit: type=1130 audit(1757721858.360:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:18.379720 kernel: audit: type=1106 audit(1757721858.369:155): pid=2198 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:18.370000 audit[2198]: CRED_DISP pid=2198 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:18.387828 kernel: audit: type=1104 audit(1757721858.370:156): pid=2198 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:18.402129 sshd[2194]: pam_unix(sshd:session): session closed for user core
Sep 13 00:04:18.404000 audit[2194]: USER_END pid=2194 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:04:18.408140 systemd[1]: sshd@5-172.31.29.1:22-139.178.89.65:36608.service: Deactivated successfully.
Sep 13 00:04:18.409446 systemd[1]: session-6.scope: Deactivated successfully.
Sep 13 00:04:18.418532 kernel: audit: type=1106 audit(1757721858.404:157): pid=2194 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:04:18.418628 kernel: audit: type=1104 audit(1757721858.404:158): pid=2194 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:04:18.404000 audit[2194]: CRED_DISP pid=2194 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:04:18.417973 systemd-logind[1911]: Session 6 logged out. Waiting for processes to exit.
Sep 13 00:04:18.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.29.1:22-139.178.89.65:36608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:18.432370 systemd[1]: Started sshd@6-172.31.29.1:22-139.178.89.65:36614.service.
Sep 13 00:04:18.436631 kernel: audit: type=1131 audit(1757721858.407:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.29.1:22-139.178.89.65:36608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:18.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.29.1:22-139.178.89.65:36614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:18.439272 systemd-logind[1911]: Removed session 6.
Sep 13 00:04:18.607000 audit[2227]: USER_ACCT pid=2227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:04:18.608929 sshd[2227]: Accepted publickey for core from 139.178.89.65 port 36614 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM
Sep 13 00:04:18.609000 audit[2227]: CRED_ACQ pid=2227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:04:18.610000 audit[2227]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe8f73350 a2=3 a3=1 items=0 ppid=1 pid=2227 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:04:18.610000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 00:04:18.612099 sshd[2227]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:04:18.619878 systemd-logind[1911]: New session 7 of user core.
Sep 13 00:04:18.620746 systemd[1]: Started session-7.scope.
Sep 13 00:04:18.631000 audit[2227]: USER_START pid=2227 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:04:18.634000 audit[2230]: CRED_ACQ pid=2230 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:04:18.729000 audit[2231]: USER_ACCT pid=2231 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:18.730706 sudo[2231]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 13 00:04:18.731000 audit[2231]: CRED_REFR pid=2231 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:18.732283 sudo[2231]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 13 00:04:18.735000 audit[2231]: USER_START pid=2231 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Sep 13 00:04:18.812975 systemd[1]: Starting docker.service...
Sep 13 00:04:18.937281 env[2241]: time="2025-09-13T00:04:18.937218600Z" level=info msg="Starting up"
Sep 13 00:04:18.942439 env[2241]: time="2025-09-13T00:04:18.942386628Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 13 00:04:18.942439 env[2241]: time="2025-09-13T00:04:18.942432216Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 13 00:04:18.942607 env[2241]: time="2025-09-13T00:04:18.942474708Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 13 00:04:18.942607 env[2241]: time="2025-09-13T00:04:18.942498300Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 13 00:04:18.946297 env[2241]: time="2025-09-13T00:04:18.946233552Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 13 00:04:18.946297 env[2241]: time="2025-09-13T00:04:18.946276164Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 13 00:04:18.946491 env[2241]: time="2025-09-13T00:04:18.946310328Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 13 00:04:18.946491 env[2241]: time="2025-09-13T00:04:18.946336548Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 13 00:04:19.165723 env[2241]: time="2025-09-13T00:04:19.165551325Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Sep 13 00:04:19.165723 env[2241]: time="2025-09-13T00:04:19.165619713Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Sep 13 00:04:19.166030 env[2241]: time="2025-09-13T00:04:19.166006377Z" level=info msg="Loading containers: start."
Sep 13 00:04:19.313000 audit[2271]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2271 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:19.313000 audit[2271]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffc39030f0 a2=0 a3=1 items=0 ppid=2241 pid=2271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:19.313000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Sep 13 00:04:19.317000 audit[2273]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2273 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:19.317000 audit[2273]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffffb24b5a0 a2=0 a3=1 items=0 ppid=2241 pid=2273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:19.317000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Sep 13 00:04:19.321000 audit[2275]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2275 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:19.321000 audit[2275]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe13cf190 a2=0 a3=1 items=0 ppid=2241 pid=2275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:19.321000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Sep 13 00:04:19.325000 
audit[2277]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2277 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:19.325000 audit[2277]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffcfe1a3d0 a2=0 a3=1 items=0 ppid=2241 pid=2277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:19.325000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Sep 13 00:04:19.338000 audit[2279]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2279 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:19.338000 audit[2279]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff1975c20 a2=0 a3=1 items=0 ppid=2241 pid=2279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:19.338000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Sep 13 00:04:19.369000 audit[2284]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=2284 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:19.369000 audit[2284]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe1693ee0 a2=0 a3=1 items=0 ppid=2241 pid=2284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:19.369000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Sep 13 00:04:19.385000 audit[2286]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2286 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:19.385000 audit[2286]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe4cdd9e0 a2=0 a3=1 items=0 ppid=2241 pid=2286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:19.385000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Sep 13 00:04:19.390000 audit[2288]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2288 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:19.390000 audit[2288]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffcf7e24e0 a2=0 a3=1 items=0 ppid=2241 pid=2288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:19.390000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Sep 13 00:04:19.394000 audit[2290]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=2290 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:19.394000 audit[2290]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=fffffeb1c930 a2=0 a3=1 items=0 ppid=2241 pid=2290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:19.394000 audit: 
PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 13 00:04:19.412000 audit[2294]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=2294 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:19.412000 audit[2294]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=fffff7c4c3d0 a2=0 a3=1 items=0 ppid=2241 pid=2294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:19.412000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Sep 13 00:04:19.419000 audit[2295]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2295 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:19.419000 audit[2295]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffff9105c80 a2=0 a3=1 items=0 ppid=2241 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:19.419000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 13 00:04:19.448817 kernel: Initializing XFRM netlink socket Sep 13 00:04:19.513964 env[2241]: time="2025-09-13T00:04:19.513917675Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 13 00:04:19.517200 (udev-worker)[2250]: Network interface NamePolicy= disabled on kernel command line. 
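The `proctitle=` fields in the audit records above are the full command lines of the processes, hex-encoded with NUL bytes separating the argv elements. A minimal decoder (Python; the helper name is my own, for illustration) recovers the `iptables` invocations the Docker daemon is issuing:

```python
def decode_proctitle(hexstr: str) -> str:
    """Decode an audit PROCTITLE value: hex bytes, argv elements NUL-separated."""
    return bytes.fromhex(hexstr).replace(b"\x00", b" ").decode()

# First PROCTITLE record from the log above:
print(decode_proctitle(
    "2F7573722F7362696E2F69707461626C6573002D2D77616974"
    "002D74006E6174002D4E00444F434B4552"
))
# /usr/sbin/iptables --wait -t nat -N DOCKER
```

Decoded this way, the sequence of records reads as Docker creating its standard chains (`DOCKER`, `DOCKER-ISOLATION-STAGE-1/2`, `DOCKER-USER`) and wiring them into `FORWARD`, `PREROUTING`, `OUTPUT`, and `POSTROUTING`.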
Sep 13 00:04:19.560000 audit[2303]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2303 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:19.560000 audit[2303]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=ffffc25ce130 a2=0 a3=1 items=0 ppid=2241 pid=2303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:19.560000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Sep 13 00:04:19.579000 audit[2306]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2306 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:19.579000 audit[2306]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=fffffc0b0cc0 a2=0 a3=1 items=0 ppid=2241 pid=2306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:19.579000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Sep 13 00:04:19.586000 audit[2309]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2309 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:19.586000 audit[2309]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffdc76eed0 a2=0 a3=1 items=0 ppid=2241 pid=2309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:19.586000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Sep 13 00:04:19.589000 audit[2311]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2311 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:19.589000 audit[2311]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffd7e3af30 a2=0 a3=1 items=0 ppid=2241 pid=2311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:19.589000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Sep 13 00:04:19.594000 audit[2313]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=2313 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:19.594000 audit[2313]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=fffffc693210 a2=0 a3=1 items=0 ppid=2241 pid=2313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:19.594000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Sep 13 00:04:19.598000 audit[2315]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=2315 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:19.598000 audit[2315]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=ffffd7292230 a2=0 a3=1 items=0 ppid=2241 pid=2315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:19.598000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Sep 13 00:04:19.602000 audit[2317]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2317 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:19.602000 audit[2317]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=ffffcbb88d40 a2=0 a3=1 items=0 ppid=2241 pid=2317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:19.602000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Sep 13 00:04:19.622000 audit[2320]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=2320 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:19.622000 audit[2320]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=ffffe5b117d0 a2=0 a3=1 items=0 ppid=2241 pid=2320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:19.622000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Sep 13 00:04:19.627000 audit[2322]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=2322 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:19.627000 audit[2322]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffd1929190 a2=0 a3=1 items=0 ppid=2241 pid=2322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:19.627000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Sep 13 00:04:19.632000 audit[2324]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2324 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:19.632000 audit[2324]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffd3a3a2d0 a2=0 a3=1 items=0 ppid=2241 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:19.632000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Sep 13 00:04:19.636000 audit[2326]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2326 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:19.636000 audit[2326]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc2074450 a2=0 a3=1 items=0 ppid=2241 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:19.636000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Sep 13 
00:04:19.638283 systemd-networkd[1599]: docker0: Link UP Sep 13 00:04:19.657000 audit[2330]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=2330 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:19.657000 audit[2330]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd62238f0 a2=0 a3=1 items=0 ppid=2241 pid=2330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:19.657000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Sep 13 00:04:19.664000 audit[2331]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2331 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:19.664000 audit[2331]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffc50a2c60 a2=0 a3=1 items=0 ppid=2241 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:19.664000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 13 00:04:19.666742 env[2241]: time="2025-09-13T00:04:19.666676944Z" level=info msg="Loading containers: done." Sep 13 00:04:19.698585 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2688759056-merged.mount: Deactivated successfully. 
Sep 13 00:04:19.717961 env[2241]: time="2025-09-13T00:04:19.717888840Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:04:19.718570 env[2241]: time="2025-09-13T00:04:19.718539360Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 13 00:04:19.718967 env[2241]: time="2025-09-13T00:04:19.718916208Z" level=info msg="Daemon has completed initialization" Sep 13 00:04:19.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:19.754677 systemd[1]: Started docker.service. Sep 13 00:04:19.771603 env[2241]: time="2025-09-13T00:04:19.771378000Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:04:21.264103 env[1920]: time="2025-09-13T00:04:21.264012564Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 13 00:04:21.902720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2818015075.mount: Deactivated successfully. Sep 13 00:04:22.762147 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:04:22.762480 systemd[1]: Stopped kubelet.service. Sep 13 00:04:22.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:22.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:22.765284 systemd[1]: Starting kubelet.service... Sep 13 00:04:23.111709 systemd[1]: Started kubelet.service. 
Sep 13 00:04:23.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:23.224627 kubelet[2372]: E0913 00:04:23.224537 2372 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:04:23.231158 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:04:23.231551 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:04:23.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Sep 13 00:04:23.886132 env[1920]: time="2025-09-13T00:04:23.886051997Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:23.890717 env[1920]: time="2025-09-13T00:04:23.890655041Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:23.894795 env[1920]: time="2025-09-13T00:04:23.894708053Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:23.899000 env[1920]: time="2025-09-13T00:04:23.898936577Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:23.900685 env[1920]: time="2025-09-13T00:04:23.900636365Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\"" Sep 13 00:04:23.903122 env[1920]: time="2025-09-13T00:04:23.903073073Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 13 00:04:25.663797 env[1920]: time="2025-09-13T00:04:25.663675342Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:25.666926 env[1920]: time="2025-09-13T00:04:25.666873654Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 13 00:04:25.673848 env[1920]: time="2025-09-13T00:04:25.671539146Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:25.676089 env[1920]: time="2025-09-13T00:04:25.676009362Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:25.678036 env[1920]: time="2025-09-13T00:04:25.677989422Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\"" Sep 13 00:04:25.678878 env[1920]: time="2025-09-13T00:04:25.678833502Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 13 00:04:27.138060 env[1920]: time="2025-09-13T00:04:27.138002777Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:27.141996 env[1920]: time="2025-09-13T00:04:27.141948305Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:27.146864 env[1920]: time="2025-09-13T00:04:27.146207909Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:27.150735 env[1920]: time="2025-09-13T00:04:27.150665261Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:27.152673 env[1920]: time="2025-09-13T00:04:27.152623277Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\"" Sep 13 00:04:27.153628 env[1920]: time="2025-09-13T00:04:27.153554393Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 13 00:04:28.467218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount168355397.mount: Deactivated successfully. Sep 13 00:04:29.372494 env[1920]: time="2025-09-13T00:04:29.372407156Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:29.383925 env[1920]: time="2025-09-13T00:04:29.383868572Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:29.393255 env[1920]: time="2025-09-13T00:04:29.393201860Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:29.403461 env[1920]: time="2025-09-13T00:04:29.403380236Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:29.404654 env[1920]: time="2025-09-13T00:04:29.404590316Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference 
\"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\"" Sep 13 00:04:29.406498 env[1920]: time="2025-09-13T00:04:29.406428188Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 00:04:29.988421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3539282250.mount: Deactivated successfully. Sep 13 00:04:31.692093 env[1920]: time="2025-09-13T00:04:31.692014511Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:31.748077 env[1920]: time="2025-09-13T00:04:31.748021884Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:31.792093 env[1920]: time="2025-09-13T00:04:31.792001560Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:31.836497 env[1920]: time="2025-09-13T00:04:31.836427936Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:31.840897 env[1920]: time="2025-09-13T00:04:31.839751396Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 13 00:04:31.841678 env[1920]: time="2025-09-13T00:04:31.841618656Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:04:32.627346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1912457378.mount: Deactivated successfully. 
Sep 13 00:04:32.638471 env[1920]: time="2025-09-13T00:04:32.638417700Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:32.644580 env[1920]: time="2025-09-13T00:04:32.644535120Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:32.649467 env[1920]: time="2025-09-13T00:04:32.649420464Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:32.654503 env[1920]: time="2025-09-13T00:04:32.654453552Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:32.656019 env[1920]: time="2025-09-13T00:04:32.655966632Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 13 00:04:32.657433 env[1920]: time="2025-09-13T00:04:32.657371628Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 13 00:04:33.226164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount864107005.mount: Deactivated successfully. Sep 13 00:04:33.281577 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 00:04:33.295000 kernel: kauditd_printk_skb: 88 callbacks suppressed Sep 13 00:04:33.295161 kernel: audit: type=1130 audit(1757721873.281:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:04:33.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:33.281947 systemd[1]: Stopped kubelet.service. Sep 13 00:04:33.284652 systemd[1]: Starting kubelet.service... Sep 13 00:04:33.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:33.305784 kernel: audit: type=1131 audit(1757721873.281:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:33.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:33.682633 systemd[1]: Started kubelet.service. Sep 13 00:04:33.693881 kernel: audit: type=1130 audit(1757721873.682:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:33.816062 kubelet[2386]: E0913 00:04:33.816002 2386 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:04:33.820678 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:04:33.821104 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 13 00:04:33.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 13 00:04:33.831828 kernel: audit: type=1131 audit(1757721873.820:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 13 00:04:36.179228 env[1920]: time="2025-09-13T00:04:36.179153786Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:36.184999 env[1920]: time="2025-09-13T00:04:36.184934570Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:36.189252 env[1920]: time="2025-09-13T00:04:36.189189662Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:36.193474 env[1920]: time="2025-09-13T00:04:36.193400822Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:36.195452 env[1920]: time="2025-09-13T00:04:36.195375086Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 13 00:04:38.086546 amazon-ssm-agent[1894]: 2025-09-13 00:04:38 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. 
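The kernel audit lines above carry record IDs of the form `audit(<epoch seconds>.<millis>:<serial>)`, which should agree with the wall-clock prefix on the same line. A quick conversion (Python, for illustration) confirms that `audit(1757721873.281:198)` matches the `Sep 13 00:04:33` kubelet restart logged above:

```python
from datetime import datetime, timezone

# audit(1757721873.281:198) -> epoch seconds, millis, and a serial number
ts = datetime.fromtimestamp(1757721873, tz=timezone.utc)
print(ts.isoformat())
# 2025-09-13T00:04:33+00:00
```

Records sharing one `audit(...)` ID (e.g. the SERVICE_START/SERVICE_STOP pairs here) belong to the same audit event, split across multiple record types.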
Sep 13 00:04:38.633142 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 13 00:04:38.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:38.643820 kernel: audit: type=1131 audit(1757721878.632:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:43.272493 systemd[1]: Stopped kubelet.service. Sep 13 00:04:43.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:43.276593 systemd[1]: Starting kubelet.service... Sep 13 00:04:43.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:43.297557 kernel: audit: type=1130 audit(1757721883.272:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:43.298429 kernel: audit: type=1131 audit(1757721883.272:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:43.343360 systemd[1]: Reloading. 
Sep 13 00:04:43.510701 /usr/lib/systemd/system-generators/torcx-generator[2439]: time="2025-09-13T00:04:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:04:43.523584 /usr/lib/systemd/system-generators/torcx-generator[2439]: time="2025-09-13T00:04:43Z" level=info msg="torcx already run" Sep 13 00:04:43.744933 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:04:43.745182 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:04:43.788630 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:04:43.995644 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 00:04:43.996075 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 00:04:43.996876 systemd[1]: Stopped kubelet.service. Sep 13 00:04:43.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 13 00:04:44.007978 systemd[1]: Starting kubelet.service... Sep 13 00:04:44.008829 kernel: audit: type=1130 audit(1757721883.996:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 13 00:04:44.314480 systemd[1]: Started kubelet.service. 
Sep 13 00:04:44.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:44.330280 kernel: audit: type=1130 audit(1757721884.314:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:44.412563 kubelet[2514]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:04:44.412563 kubelet[2514]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:04:44.412563 kubelet[2514]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:04:44.413234 kubelet[2514]: I0913 00:04:44.412670 2514 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:04:45.580291 kubelet[2514]: I0913 00:04:45.580242 2514 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:04:45.580924 kubelet[2514]: I0913 00:04:45.580872 2514 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:04:45.581340 kubelet[2514]: I0913 00:04:45.581296 2514 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:04:45.654837 kubelet[2514]: I0913 00:04:45.654730 2514 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:04:45.655215 kubelet[2514]: E0913 00:04:45.655175 2514 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.29.1:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.29.1:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:45.669277 kubelet[2514]: E0913 00:04:45.669223 2514 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:04:45.669529 kubelet[2514]: I0913 00:04:45.669502 2514 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:04:45.679004 kubelet[2514]: I0913 00:04:45.678946 2514 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:04:45.684213 kubelet[2514]: I0913 00:04:45.684158 2514 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:04:45.684486 kubelet[2514]: I0913 00:04:45.684426 2514 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:04:45.684788 kubelet[2514]: I0913 00:04:45.684479 2514 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerP
olicyOptions":null,"CgroupVersion":1} Sep 13 00:04:45.684977 kubelet[2514]: I0913 00:04:45.684923 2514 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:04:45.684977 kubelet[2514]: I0913 00:04:45.684947 2514 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:04:45.685287 kubelet[2514]: I0913 00:04:45.685246 2514 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:04:45.697011 kubelet[2514]: I0913 00:04:45.696975 2514 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:04:45.697426 kubelet[2514]: I0913 00:04:45.697405 2514 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:04:45.697576 kubelet[2514]: W0913 00:04:45.697301 2514 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.29.1:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-1&limit=500&resourceVersion=0": dial tcp 172.31.29.1:6443: connect: connection refused Sep 13 00:04:45.697659 kubelet[2514]: E0913 00:04:45.697600 2514 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.29.1:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-1&limit=500&resourceVersion=0\": dial tcp 172.31.29.1:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:45.697790 kubelet[2514]: I0913 00:04:45.697751 2514 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:04:45.697929 kubelet[2514]: I0913 00:04:45.697908 2514 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:04:45.710181 kubelet[2514]: W0913 00:04:45.710110 2514 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.29.1:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.29.1:6443: connect: connection refused Sep 13 
00:04:45.710456 kubelet[2514]: E0913 00:04:45.710417 2514 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.29.1:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.1:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:45.711042 kubelet[2514]: I0913 00:04:45.711014 2514 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:04:45.712489 kubelet[2514]: I0913 00:04:45.712458 2514 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:04:45.712968 kubelet[2514]: W0913 00:04:45.712947 2514 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:04:45.716939 kubelet[2514]: I0913 00:04:45.716900 2514 server.go:1274] "Started kubelet" Sep 13 00:04:45.717505 kubelet[2514]: I0913 00:04:45.717427 2514 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:04:45.719535 kubelet[2514]: I0913 00:04:45.719488 2514 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:04:45.723108 kubelet[2514]: I0913 00:04:45.723024 2514 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:04:45.723636 kubelet[2514]: I0913 00:04:45.723607 2514 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:04:45.723000 audit[2514]: AVC avc: denied { mac_admin } for pid=2514 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:04:45.729602 kubelet[2514]: E0913 00:04:45.727503 2514 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://172.31.29.1:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.1:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-1.1864aebe04155e85 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-1,UID:ip-172-31-29-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-1,},FirstTimestamp:2025-09-13 00:04:45.716864645 +0000 UTC m=+1.380955838,LastTimestamp:2025-09-13 00:04:45.716864645 +0000 UTC m=+1.380955838,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-1,}" Sep 13 00:04:45.732198 kubelet[2514]: I0913 00:04:45.732057 2514 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Sep 13 00:04:45.732198 kubelet[2514]: I0913 00:04:45.732173 2514 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Sep 13 00:04:45.732413 kubelet[2514]: I0913 00:04:45.732356 2514 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:04:45.732850 kubelet[2514]: E0913 00:04:45.732816 2514 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:04:45.733351 kubelet[2514]: I0913 00:04:45.733323 2514 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:04:45.723000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:04:45.738378 kernel: audit: type=1400 audit(1757721885.723:207): avc: denied { mac_admin } for pid=2514 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:04:45.738513 kernel: audit: type=1401 audit(1757721885.723:207): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:04:45.723000 audit[2514]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40009291a0 a1=400082d2f0 a2=4000929170 a3=25 items=0 ppid=1 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:45.740386 kubelet[2514]: I0913 00:04:45.740358 2514 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:04:45.740727 kubelet[2514]: I0913 00:04:45.740705 2514 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:04:45.740933 kubelet[2514]: I0913 00:04:45.740915 2514 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:04:45.743502 kubelet[2514]: I0913 00:04:45.743470 2514 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:04:45.743840 kubelet[2514]: I0913 00:04:45.743810 2514 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:04:45.744563 kubelet[2514]: W0913 00:04:45.744512 
2514 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.29.1:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.1:6443: connect: connection refused Sep 13 00:04:45.744737 kubelet[2514]: E0913 00:04:45.744705 2514 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.29.1:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.1:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:45.747217 kubelet[2514]: I0913 00:04:45.747186 2514 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:04:45.749793 kernel: audit: type=1300 audit(1757721885.723:207): arch=c00000b7 syscall=5 success=no exit=-22 a0=40009291a0 a1=400082d2f0 a2=4000929170 a3=25 items=0 ppid=1 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:45.750167 kubelet[2514]: E0913 00:04:45.750121 2514 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-1\" not found" Sep 13 00:04:45.752809 kernel: audit: type=1327 audit(1757721885.723:207): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:04:45.723000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:04:45.731000 
audit[2514]: AVC avc: denied { mac_admin } for pid=2514 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:04:45.771218 kernel: audit: type=1400 audit(1757721885.731:208): avc: denied { mac_admin } for pid=2514 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:04:45.731000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:04:45.775616 kernel: audit: type=1401 audit(1757721885.731:208): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:04:45.731000 audit[2514]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40008f5140 a1=400082d308 a2=4000929230 a3=25 items=0 ppid=1 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:45.787483 kernel: audit: type=1300 audit(1757721885.731:208): arch=c00000b7 syscall=5 success=no exit=-22 a0=40008f5140 a1=400082d308 a2=4000929230 a3=25 items=0 ppid=1 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:45.788004 kubelet[2514]: E0913 00:04:45.787920 2514 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-1?timeout=10s\": dial tcp 172.31.29.1:6443: connect: connection refused" interval="200ms" Sep 13 00:04:45.731000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:04:45.800990 kernel: audit: type=1327 audit(1757721885.731:208): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:04:45.751000 audit[2525]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2525 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:45.751000 audit[2525]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc7d9fb70 a2=0 a3=1 items=0 ppid=2514 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:45.751000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Sep 13 00:04:45.760000 audit[2526]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2526 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:45.760000 audit[2526]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdd5115e0 a2=0 a3=1 items=0 ppid=2514 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:45.760000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Sep 13 00:04:45.772000 audit[2528]: NETFILTER_CFG 
table=filter:28 family=2 entries=2 op=nft_register_chain pid=2528 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:45.772000 audit[2528]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe5b53aa0 a2=0 a3=1 items=0 ppid=2514 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:45.772000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 13 00:04:45.806000 audit[2534]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2534 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:45.806000 audit[2534]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe5d577f0 a2=0 a3=1 items=0 ppid=2514 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:45.806000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 13 00:04:45.830000 audit[2538]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2538 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:45.830000 audit[2538]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffff6884ea0 a2=0 a3=1 items=0 ppid=2514 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:45.830000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Sep 13 00:04:45.835299 kubelet[2514]: I0913 00:04:45.832205 2514 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:04:45.836000 audit[2540]: NETFILTER_CFG table=mangle:31 family=2 entries=1 op=nft_register_chain pid=2540 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:45.836000 audit[2540]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffece4c950 a2=0 a3=1 items=0 ppid=2514 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:45.836000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Sep 13 00:04:45.836000 audit[2539]: NETFILTER_CFG table=mangle:32 family=10 entries=2 op=nft_register_chain pid=2539 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:04:45.836000 audit[2539]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe7e9ab20 a2=0 a3=1 items=0 ppid=2514 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:45.836000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Sep 13 00:04:45.839331 kubelet[2514]: I0913 00:04:45.839274 2514 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:04:45.839331 kubelet[2514]: I0913 00:04:45.839326 2514 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:04:45.839552 kubelet[2514]: I0913 00:04:45.839363 2514 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:04:45.839552 kubelet[2514]: E0913 00:04:45.839445 2514 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:04:45.839000 audit[2541]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=2541 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:45.839000 audit[2541]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffa0b1000 a2=0 a3=1 items=0 ppid=2514 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:45.839000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Sep 13 00:04:45.841000 audit[2542]: NETFILTER_CFG table=mangle:34 family=10 entries=1 op=nft_register_chain pid=2542 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:04:45.841000 audit[2542]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc24ac4f0 a2=0 a3=1 items=0 ppid=2514 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:45.841000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Sep 13 00:04:45.842000 audit[2543]: NETFILTER_CFG table=filter:35 family=2 entries=1 op=nft_register_chain pid=2543 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Sep 13 00:04:45.842000 audit[2543]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdfc28e90 a2=0 a3=1 items=0 ppid=2514 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:45.842000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Sep 13 00:04:45.846000 audit[2544]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=2544 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:04:45.846000 audit[2544]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffc5c4f660 a2=0 a3=1 items=0 ppid=2514 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:45.846000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Sep 13 00:04:45.848946 kubelet[2514]: W0913 00:04:45.848860 2514 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.29.1:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.1:6443: connect: connection refused Sep 13 00:04:45.849111 kubelet[2514]: E0913 00:04:45.848959 2514 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.29.1:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.1:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:45.849182 kubelet[2514]: I0913 00:04:45.849144 2514 cpu_manager.go:214] "Starting CPU 
manager" policy="none" Sep 13 00:04:45.849182 kubelet[2514]: I0913 00:04:45.849165 2514 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:04:45.849302 kubelet[2514]: I0913 00:04:45.849197 2514 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:04:45.848000 audit[2545]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=2545 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:04:45.848000 audit[2545]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc197a2e0 a2=0 a3=1 items=0 ppid=2514 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:45.848000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Sep 13 00:04:45.850520 kubelet[2514]: E0913 00:04:45.850481 2514 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-1\" not found" Sep 13 00:04:45.854571 kubelet[2514]: I0913 00:04:45.854541 2514 policy_none.go:49] "None policy: Start" Sep 13 00:04:45.856294 kubelet[2514]: I0913 00:04:45.856244 2514 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:04:45.856294 kubelet[2514]: I0913 00:04:45.856295 2514 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:04:45.867260 kubelet[2514]: I0913 00:04:45.867201 2514 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:04:45.866000 audit[2514]: AVC avc: denied { mac_admin } for pid=2514 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:04:45.866000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 
13 00:04:45.866000 audit[2514]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000a515f0 a1=4000e5b9e0 a2=4000a515c0 a3=25 items=0 ppid=1 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:45.866000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:04:45.867705 kubelet[2514]: I0913 00:04:45.867356 2514 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Sep 13 00:04:45.867705 kubelet[2514]: I0913 00:04:45.867551 2514 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:04:45.867705 kubelet[2514]: I0913 00:04:45.867569 2514 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:04:45.871179 kubelet[2514]: I0913 00:04:45.871126 2514 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:04:45.878274 kubelet[2514]: E0913 00:04:45.878227 2514 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-1\" not found" Sep 13 00:04:45.970357 kubelet[2514]: I0913 00:04:45.970321 2514 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-29-1" Sep 13 00:04:45.971400 kubelet[2514]: E0913 00:04:45.971364 2514 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.29.1:6443/api/v1/nodes\": dial tcp 172.31.29.1:6443: connect: connection refused" node="ip-172-31-29-1" Sep 13 00:04:45.989209 kubelet[2514]: 
E0913 00:04:45.989152 2514 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-1?timeout=10s\": dial tcp 172.31.29.1:6443: connect: connection refused" interval="400ms" Sep 13 00:04:46.041565 kubelet[2514]: I0913 00:04:46.041518 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4af99453e49a3db4c539cb1f66f19ea5-ca-certs\") pod \"kube-apiserver-ip-172-31-29-1\" (UID: \"4af99453e49a3db4c539cb1f66f19ea5\") " pod="kube-system/kube-apiserver-ip-172-31-29-1" Sep 13 00:04:46.041655 kubelet[2514]: I0913 00:04:46.041574 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4af99453e49a3db4c539cb1f66f19ea5-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-1\" (UID: \"4af99453e49a3db4c539cb1f66f19ea5\") " pod="kube-system/kube-apiserver-ip-172-31-29-1" Sep 13 00:04:46.041655 kubelet[2514]: I0913 00:04:46.041619 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4af99453e49a3db4c539cb1f66f19ea5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-1\" (UID: \"4af99453e49a3db4c539cb1f66f19ea5\") " pod="kube-system/kube-apiserver-ip-172-31-29-1" Sep 13 00:04:46.144408 kubelet[2514]: I0913 00:04:46.142510 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d85fec72667ace44ca085fd740ef889b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-1\" (UID: \"d85fec72667ace44ca085fd740ef889b\") " pod="kube-system/kube-controller-manager-ip-172-31-29-1" Sep 13 00:04:46.144408 kubelet[2514]: I0913 00:04:46.143944 2514 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d85fec72667ace44ca085fd740ef889b-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-1\" (UID: \"d85fec72667ace44ca085fd740ef889b\") " pod="kube-system/kube-controller-manager-ip-172-31-29-1" Sep 13 00:04:46.144408 kubelet[2514]: I0913 00:04:46.144021 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d85fec72667ace44ca085fd740ef889b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-1\" (UID: \"d85fec72667ace44ca085fd740ef889b\") " pod="kube-system/kube-controller-manager-ip-172-31-29-1" Sep 13 00:04:46.144408 kubelet[2514]: I0913 00:04:46.144129 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d85fec72667ace44ca085fd740ef889b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-1\" (UID: \"d85fec72667ace44ca085fd740ef889b\") " pod="kube-system/kube-controller-manager-ip-172-31-29-1" Sep 13 00:04:46.144408 kubelet[2514]: I0913 00:04:46.144213 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d85fec72667ace44ca085fd740ef889b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-1\" (UID: \"d85fec72667ace44ca085fd740ef889b\") " pod="kube-system/kube-controller-manager-ip-172-31-29-1" Sep 13 00:04:46.144815 kubelet[2514]: I0913 00:04:46.144297 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/93f4e76ffe76c928bbff73faf5ff691f-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-1\" (UID: \"93f4e76ffe76c928bbff73faf5ff691f\") " pod="kube-system/kube-scheduler-ip-172-31-29-1" Sep 13 
00:04:46.174568 kubelet[2514]: I0913 00:04:46.174492 2514 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-29-1" Sep 13 00:04:46.175406 kubelet[2514]: E0913 00:04:46.175358 2514 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.29.1:6443/api/v1/nodes\": dial tcp 172.31.29.1:6443: connect: connection refused" node="ip-172-31-29-1" Sep 13 00:04:46.254572 env[1920]: time="2025-09-13T00:04:46.254504181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-1,Uid:d85fec72667ace44ca085fd740ef889b,Namespace:kube-system,Attempt:0,}" Sep 13 00:04:46.262875 env[1920]: time="2025-09-13T00:04:46.262377244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-1,Uid:4af99453e49a3db4c539cb1f66f19ea5,Namespace:kube-system,Attempt:0,}" Sep 13 00:04:46.264688 env[1920]: time="2025-09-13T00:04:46.264279751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-1,Uid:93f4e76ffe76c928bbff73faf5ff691f,Namespace:kube-system,Attempt:0,}" Sep 13 00:04:46.390832 kubelet[2514]: E0913 00:04:46.390711 2514 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-1?timeout=10s\": dial tcp 172.31.29.1:6443: connect: connection refused" interval="800ms" Sep 13 00:04:46.577842 kubelet[2514]: I0913 00:04:46.577593 2514 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-29-1" Sep 13 00:04:46.578208 kubelet[2514]: E0913 00:04:46.578165 2514 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.29.1:6443/api/v1/nodes\": dial tcp 172.31.29.1:6443: connect: connection refused" node="ip-172-31-29-1" Sep 13 00:04:46.764298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3420603018.mount: Deactivated successfully. 
Sep 13 00:04:46.783219 env[1920]: time="2025-09-13T00:04:46.783141355Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:46.787288 env[1920]: time="2025-09-13T00:04:46.787222099Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:46.789508 env[1920]: time="2025-09-13T00:04:46.789446310Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:46.791608 env[1920]: time="2025-09-13T00:04:46.791547169Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:46.796998 env[1920]: time="2025-09-13T00:04:46.796943021Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:46.801162 env[1920]: time="2025-09-13T00:04:46.801113300Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:46.803214 env[1920]: time="2025-09-13T00:04:46.803169074Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:46.804832 env[1920]: time="2025-09-13T00:04:46.804759047Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 13 00:04:46.806422 env[1920]: time="2025-09-13T00:04:46.806362077Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:46.812968 env[1920]: time="2025-09-13T00:04:46.812893884Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:46.816302 env[1920]: time="2025-09-13T00:04:46.816234441Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:46.822868 env[1920]: time="2025-09-13T00:04:46.822816086Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:04:46.855432 kubelet[2514]: W0913 00:04:46.855195 2514 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.29.1:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.29.1:6443: connect: connection refused Sep 13 00:04:46.855432 kubelet[2514]: E0913 00:04:46.855286 2514 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.29.1:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.1:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:46.905837 env[1920]: time="2025-09-13T00:04:46.905704145Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:04:46.906110 env[1920]: time="2025-09-13T00:04:46.906044436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:04:46.906338 env[1920]: time="2025-09-13T00:04:46.906276305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:46.908519 env[1920]: time="2025-09-13T00:04:46.907712603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:04:46.908519 env[1920]: time="2025-09-13T00:04:46.908176401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:04:46.908519 env[1920]: time="2025-09-13T00:04:46.908219853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:46.908898 env[1920]: time="2025-09-13T00:04:46.907591304Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/322fda4b3f8c7355d318695cfce14d8cf5cb2e3ee272b63e6a672490468720ea pid=2563 runtime=io.containerd.runc.v2 Sep 13 00:04:46.909196 env[1920]: time="2025-09-13T00:04:46.909069771Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/76c147c702f1808b26dd361808d0ec92ac270dd5d1d4486d11032de84cfca04b pid=2564 runtime=io.containerd.runc.v2 Sep 13 00:04:46.916248 env[1920]: time="2025-09-13T00:04:46.916116689Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:04:46.916444 env[1920]: time="2025-09-13T00:04:46.916194079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:04:46.916444 env[1920]: time="2025-09-13T00:04:46.916252772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:46.921048 env[1920]: time="2025-09-13T00:04:46.920936829Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e60d159f642436a695b3af8bc512ddca94cfa361dc6929aee00fca3f118e0752 pid=2582 runtime=io.containerd.runc.v2 Sep 13 00:04:47.056833 kubelet[2514]: W0913 00:04:47.055453 2514 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.29.1:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-1&limit=500&resourceVersion=0": dial tcp 172.31.29.1:6443: connect: connection refused Sep 13 00:04:47.056833 kubelet[2514]: E0913 00:04:47.055570 2514 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.29.1:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-1&limit=500&resourceVersion=0\": dial tcp 172.31.29.1:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:47.067703 env[1920]: time="2025-09-13T00:04:47.067633846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-1,Uid:4af99453e49a3db4c539cb1f66f19ea5,Namespace:kube-system,Attempt:0,} returns sandbox id \"76c147c702f1808b26dd361808d0ec92ac270dd5d1d4486d11032de84cfca04b\"" Sep 13 00:04:47.077404 env[1920]: time="2025-09-13T00:04:47.077346739Z" level=info msg="CreateContainer within sandbox \"76c147c702f1808b26dd361808d0ec92ac270dd5d1d4486d11032de84cfca04b\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:04:47.123875 env[1920]: time="2025-09-13T00:04:47.122435240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-1,Uid:93f4e76ffe76c928bbff73faf5ff691f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e60d159f642436a695b3af8bc512ddca94cfa361dc6929aee00fca3f118e0752\"" Sep 13 00:04:47.127562 env[1920]: time="2025-09-13T00:04:47.127501042Z" level=info msg="CreateContainer within sandbox \"76c147c702f1808b26dd361808d0ec92ac270dd5d1d4486d11032de84cfca04b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6872756bd52e7794d2feb7c50279e2ee1d759325cf9d3e19b90b3222b22b9338\"" Sep 13 00:04:47.128606 env[1920]: time="2025-09-13T00:04:47.128556907Z" level=info msg="StartContainer for \"6872756bd52e7794d2feb7c50279e2ee1d759325cf9d3e19b90b3222b22b9338\"" Sep 13 00:04:47.132335 env[1920]: time="2025-09-13T00:04:47.132278815Z" level=info msg="CreateContainer within sandbox \"e60d159f642436a695b3af8bc512ddca94cfa361dc6929aee00fca3f118e0752\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:04:47.141816 env[1920]: time="2025-09-13T00:04:47.141737015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-1,Uid:d85fec72667ace44ca085fd740ef889b,Namespace:kube-system,Attempt:0,} returns sandbox id \"322fda4b3f8c7355d318695cfce14d8cf5cb2e3ee272b63e6a672490468720ea\"" Sep 13 00:04:47.147117 env[1920]: time="2025-09-13T00:04:47.146814254Z" level=info msg="CreateContainer within sandbox \"322fda4b3f8c7355d318695cfce14d8cf5cb2e3ee272b63e6a672490468720ea\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:04:47.180980 env[1920]: time="2025-09-13T00:04:47.180913541Z" level=info msg="CreateContainer within sandbox \"e60d159f642436a695b3af8bc512ddca94cfa361dc6929aee00fca3f118e0752\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"fb255b19651f3fd8ef9c39b58e77371ce84c2ab4977f123c1583eca706359088\"" Sep 13 00:04:47.182246 env[1920]: time="2025-09-13T00:04:47.182190330Z" level=info msg="StartContainer for \"fb255b19651f3fd8ef9c39b58e77371ce84c2ab4977f123c1583eca706359088\"" Sep 13 00:04:47.190830 kubelet[2514]: W0913 00:04:47.189661 2514 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.29.1:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.1:6443: connect: connection refused Sep 13 00:04:47.190830 kubelet[2514]: E0913 00:04:47.189817 2514 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.29.1:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.1:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:47.190830 kubelet[2514]: W0913 00:04:47.190669 2514 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.29.1:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.1:6443: connect: connection refused Sep 13 00:04:47.190830 kubelet[2514]: E0913 00:04:47.190752 2514 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.29.1:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.1:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:04:47.197742 kubelet[2514]: E0913 00:04:47.197674 2514 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-1?timeout=10s\": dial tcp 172.31.29.1:6443: connect: connection refused" interval="1.6s" Sep 13 00:04:47.206519 
env[1920]: time="2025-09-13T00:04:47.206433621Z" level=info msg="CreateContainer within sandbox \"322fda4b3f8c7355d318695cfce14d8cf5cb2e3ee272b63e6a672490468720ea\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fead1a70d39e98d6f30c240ba7189c1c188f9ffc5805441d39346e8af2867e22\"" Sep 13 00:04:47.207488 env[1920]: time="2025-09-13T00:04:47.207440021Z" level=info msg="StartContainer for \"fead1a70d39e98d6f30c240ba7189c1c188f9ffc5805441d39346e8af2867e22\"" Sep 13 00:04:47.329023 env[1920]: time="2025-09-13T00:04:47.328919499Z" level=info msg="StartContainer for \"6872756bd52e7794d2feb7c50279e2ee1d759325cf9d3e19b90b3222b22b9338\" returns successfully" Sep 13 00:04:47.386873 kubelet[2514]: I0913 00:04:47.384883 2514 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-29-1" Sep 13 00:04:47.386873 kubelet[2514]: E0913 00:04:47.385654 2514 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.29.1:6443/api/v1/nodes\": dial tcp 172.31.29.1:6443: connect: connection refused" node="ip-172-31-29-1" Sep 13 00:04:47.389409 env[1920]: time="2025-09-13T00:04:47.389047353Z" level=info msg="StartContainer for \"fb255b19651f3fd8ef9c39b58e77371ce84c2ab4977f123c1583eca706359088\" returns successfully" Sep 13 00:04:47.447431 env[1920]: time="2025-09-13T00:04:47.447362427Z" level=info msg="StartContainer for \"fead1a70d39e98d6f30c240ba7189c1c188f9ffc5805441d39346e8af2867e22\" returns successfully" Sep 13 00:04:48.999732 kubelet[2514]: I0913 00:04:48.999682 2514 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-29-1" Sep 13 00:04:50.326715 kubelet[2514]: E0913 00:04:50.326637 2514 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-29-1\" not found" node="ip-172-31-29-1" Sep 13 00:04:50.437560 kubelet[2514]: I0913 00:04:50.437487 2514 kubelet_node_status.go:75] "Successfully registered node" 
node="ip-172-31-29-1" Sep 13 00:04:50.712065 kubelet[2514]: I0913 00:04:50.711942 2514 apiserver.go:52] "Watching apiserver" Sep 13 00:04:50.741561 kubelet[2514]: I0913 00:04:50.741470 2514 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:04:52.833976 systemd[1]: Reloading. Sep 13 00:04:52.954294 update_engine[1912]: I0913 00:04:52.953670 1912 update_attempter.cc:509] Updating boot flags... Sep 13 00:04:52.986043 /usr/lib/systemd/system-generators/torcx-generator[2804]: time="2025-09-13T00:04:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:04:52.988113 /usr/lib/systemd/system-generators/torcx-generator[2804]: time="2025-09-13T00:04:52Z" level=info msg="torcx already run" Sep 13 00:04:53.411491 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:04:53.411532 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:04:53.485515 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:04:53.844221 systemd[1]: Stopping kubelet.service... Sep 13 00:04:53.864676 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:04:53.865324 systemd[1]: Stopped kubelet.service. Sep 13 00:04:53.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:04:53.867373 kernel: kauditd_printk_skb: 40 callbacks suppressed Sep 13 00:04:53.867440 kernel: audit: type=1131 audit(1757721893.864:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:53.882855 systemd[1]: Starting kubelet.service... Sep 13 00:04:54.249673 systemd[1]: Started kubelet.service. Sep 13 00:04:54.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:54.271824 kernel: audit: type=1130 audit(1757721894.249:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:04:54.396163 kubelet[3053]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:04:54.396163 kubelet[3053]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:04:54.396163 kubelet[3053]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:04:54.396843 kubelet[3053]: I0913 00:04:54.396279 3053 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:04:54.409449 kubelet[3053]: I0913 00:04:54.409253 3053 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:04:54.409449 kubelet[3053]: I0913 00:04:54.409438 3053 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:04:54.410051 kubelet[3053]: I0913 00:04:54.410012 3053 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:04:54.415658 kubelet[3053]: I0913 00:04:54.415560 3053 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 00:04:54.422083 kubelet[3053]: I0913 00:04:54.421989 3053 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:04:54.429971 kubelet[3053]: E0913 00:04:54.429909 3053 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:04:54.430142 kubelet[3053]: I0913 00:04:54.429966 3053 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:04:54.435012 kubelet[3053]: I0913 00:04:54.434962 3053 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:04:54.437010 kubelet[3053]: I0913 00:04:54.435689 3053 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:04:54.437010 kubelet[3053]: I0913 00:04:54.436199 3053 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:04:54.438298 kubelet[3053]: I0913 00:04:54.436253 3053 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerP
olicyOptions":null,"CgroupVersion":1} Sep 13 00:04:54.438641 kubelet[3053]: I0913 00:04:54.438317 3053 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:04:54.438641 kubelet[3053]: I0913 00:04:54.438343 3053 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:04:54.438641 kubelet[3053]: I0913 00:04:54.438413 3053 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:04:54.438641 kubelet[3053]: I0913 00:04:54.438634 3053 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:04:54.439191 kubelet[3053]: I0913 00:04:54.438660 3053 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:04:54.439191 kubelet[3053]: I0913 00:04:54.438694 3053 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:04:54.439191 kubelet[3053]: I0913 00:04:54.438722 3053 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:04:54.444000 audit[3053]: AVC avc: denied { mac_admin } for pid=3053 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:04:54.448659 kubelet[3053]: I0913 00:04:54.441017 3053 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:04:54.448659 kubelet[3053]: I0913 00:04:54.441856 3053 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:04:54.448659 kubelet[3053]: I0913 00:04:54.442517 3053 server.go:1274] "Started kubelet" Sep 13 00:04:54.453340 kubelet[3053]: I0913 00:04:54.453268 3053 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Sep 13 00:04:54.453514 kubelet[3053]: I0913 00:04:54.453376 3053 kubelet.go:1434] "Unprivileged 
containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Sep 13 00:04:54.453514 kubelet[3053]: I0913 00:04:54.453425 3053 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:04:54.444000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:04:54.458879 kernel: audit: type=1400 audit(1757721894.444:224): avc: denied { mac_admin } for pid=3053 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:04:54.459080 kernel: audit: type=1401 audit(1757721894.444:224): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:04:54.459314 kubelet[3053]: I0913 00:04:54.459251 3053 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:04:54.444000 audit[3053]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000aa09c0 a1=400099e9a8 a2=4000aa0990 a3=25 items=0 ppid=1 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:54.471434 kernel: audit: type=1300 audit(1757721894.444:224): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000aa09c0 a1=400099e9a8 a2=4000aa0990 a3=25 items=0 ppid=1 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:54.444000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:04:54.484891 kernel: audit: 
type=1327 audit(1757721894.444:224): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:04:54.489489 kubelet[3053]: I0913 00:04:54.485342 3053 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:04:54.489489 kubelet[3053]: I0913 00:04:54.485941 3053 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:04:54.489489 kubelet[3053]: I0913 00:04:54.486376 3053 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:04:54.452000 audit[3053]: AVC avc: denied { mac_admin } for pid=3053 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:04:54.499419 kernel: audit: type=1400 audit(1757721894.452:225): avc: denied { mac_admin } for pid=3053 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:04:54.452000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:04:54.505908 kernel: audit: type=1401 audit(1757721894.452:225): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:04:54.506033 kubelet[3053]: I0913 00:04:54.504117 3053 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:04:54.506033 kubelet[3053]: E0913 00:04:54.504533 3053 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-1\" not found" Sep 13 00:04:54.506033 kubelet[3053]: I0913 00:04:54.505519 3053 desired_state_of_world_populator.go:147] 
"Desired state populator starts to run" Sep 13 00:04:54.506033 kubelet[3053]: I0913 00:04:54.505852 3053 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:04:54.452000 audit[3053]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000652ec0 a1=400099e9c0 a2=4000aa0a50 a3=25 items=0 ppid=1 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:54.518238 kernel: audit: type=1300 audit(1757721894.452:225): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000652ec0 a1=400099e9c0 a2=4000aa0a50 a3=25 items=0 ppid=1 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:54.519129 kubelet[3053]: I0913 00:04:54.519097 3053 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:04:54.534101 kernel: audit: type=1327 audit(1757721894.452:225): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:04:54.452000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:04:54.552814 kubelet[3053]: I0913 00:04:54.552718 3053 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:04:54.553281 kubelet[3053]: I0913 00:04:54.553238 3053 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:04:54.571251 kubelet[3053]: I0913 00:04:54.571218 3053 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:04:54.606873 kubelet[3053]: E0913 00:04:54.606824 3053 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:04:54.607163 kubelet[3053]: I0913 00:04:54.607130 3053 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:04:54.609442 kubelet[3053]: I0913 00:04:54.609402 3053 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:04:54.609666 kubelet[3053]: I0913 00:04:54.609644 3053 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:04:54.609829 kubelet[3053]: I0913 00:04:54.609809 3053 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:04:54.610050 kubelet[3053]: E0913 00:04:54.609984 3053 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:04:54.627550 kubelet[3053]: E0913 00:04:54.627483 3053 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-1\" not found" Sep 13 00:04:54.710905 kubelet[3053]: E0913 00:04:54.710863 3053 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:04:54.769986 kubelet[3053]: I0913 00:04:54.769726 3053 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:04:54.770187 kubelet[3053]: I0913 00:04:54.770158 3053 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:04:54.770346 kubelet[3053]: I0913 00:04:54.770325 3053 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:04:54.770698 kubelet[3053]: I0913 00:04:54.770675 3053 state_mem.go:88] "Updated default 
CPUSet" cpuSet="" Sep 13 00:04:54.770845 kubelet[3053]: I0913 00:04:54.770804 3053 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:04:54.770951 kubelet[3053]: I0913 00:04:54.770931 3053 policy_none.go:49] "None policy: Start" Sep 13 00:04:54.772432 kubelet[3053]: I0913 00:04:54.772394 3053 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:04:54.772565 kubelet[3053]: I0913 00:04:54.772445 3053 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:04:54.772728 kubelet[3053]: I0913 00:04:54.772703 3053 state_mem.go:75] "Updated machine memory state" Sep 13 00:04:54.777193 kubelet[3053]: I0913 00:04:54.777144 3053 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:04:54.776000 audit[3053]: AVC avc: denied { mac_admin } for pid=3053 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:04:54.776000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:04:54.776000 audit[3053]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000b462a0 a1=400086c888 a2=4000b46270 a3=25 items=0 ppid=1 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:54.776000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:04:54.777827 kubelet[3053]: I0913 00:04:54.777305 3053 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Sep 13 00:04:54.779162 kubelet[3053]: I0913 00:04:54.779118 3053 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:04:54.779274 kubelet[3053]: I0913 00:04:54.779154 3053 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:04:54.780710 kubelet[3053]: I0913 00:04:54.780682 3053 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:04:54.924227 kubelet[3053]: I0913 00:04:54.919370 3053 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-29-1" Sep 13 00:04:54.938815 kubelet[3053]: E0913 00:04:54.938747 3053 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-29-1\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-29-1" Sep 13 00:04:54.944954 kubelet[3053]: I0913 00:04:54.944915 3053 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-29-1" Sep 13 00:04:54.945251 kubelet[3053]: I0913 00:04:54.945232 3053 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-29-1" Sep 13 00:04:55.012252 kubelet[3053]: I0913 00:04:55.012195 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d85fec72667ace44ca085fd740ef889b-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-1\" (UID: \"d85fec72667ace44ca085fd740ef889b\") " pod="kube-system/kube-controller-manager-ip-172-31-29-1" Sep 13 00:04:55.012556 kubelet[3053]: I0913 00:04:55.012527 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d85fec72667ace44ca085fd740ef889b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-1\" (UID: 
\"d85fec72667ace44ca085fd740ef889b\") " pod="kube-system/kube-controller-manager-ip-172-31-29-1" Sep 13 00:04:55.012851 kubelet[3053]: I0913 00:04:55.012723 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d85fec72667ace44ca085fd740ef889b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-1\" (UID: \"d85fec72667ace44ca085fd740ef889b\") " pod="kube-system/kube-controller-manager-ip-172-31-29-1" Sep 13 00:04:55.013067 kubelet[3053]: I0913 00:04:55.013028 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d85fec72667ace44ca085fd740ef889b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-1\" (UID: \"d85fec72667ace44ca085fd740ef889b\") " pod="kube-system/kube-controller-manager-ip-172-31-29-1" Sep 13 00:04:55.013264 kubelet[3053]: I0913 00:04:55.013213 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/93f4e76ffe76c928bbff73faf5ff691f-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-1\" (UID: \"93f4e76ffe76c928bbff73faf5ff691f\") " pod="kube-system/kube-scheduler-ip-172-31-29-1" Sep 13 00:04:55.013514 kubelet[3053]: I0913 00:04:55.013487 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4af99453e49a3db4c539cb1f66f19ea5-ca-certs\") pod \"kube-apiserver-ip-172-31-29-1\" (UID: \"4af99453e49a3db4c539cb1f66f19ea5\") " pod="kube-system/kube-apiserver-ip-172-31-29-1" Sep 13 00:04:55.013739 kubelet[3053]: I0913 00:04:55.013690 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4af99453e49a3db4c539cb1f66f19ea5-k8s-certs\") pod 
\"kube-apiserver-ip-172-31-29-1\" (UID: \"4af99453e49a3db4c539cb1f66f19ea5\") " pod="kube-system/kube-apiserver-ip-172-31-29-1" Sep 13 00:04:55.013934 kubelet[3053]: I0913 00:04:55.013908 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4af99453e49a3db4c539cb1f66f19ea5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-1\" (UID: \"4af99453e49a3db4c539cb1f66f19ea5\") " pod="kube-system/kube-apiserver-ip-172-31-29-1" Sep 13 00:04:55.014175 kubelet[3053]: I0913 00:04:55.014150 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d85fec72667ace44ca085fd740ef889b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-1\" (UID: \"d85fec72667ace44ca085fd740ef889b\") " pod="kube-system/kube-controller-manager-ip-172-31-29-1" Sep 13 00:04:55.477257 kubelet[3053]: I0913 00:04:55.477197 3053 apiserver.go:52] "Watching apiserver" Sep 13 00:04:55.505951 kubelet[3053]: I0913 00:04:55.505893 3053 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:04:55.689163 kubelet[3053]: E0913 00:04:55.689046 3053 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-29-1\" already exists" pod="kube-system/kube-apiserver-ip-172-31-29-1" Sep 13 00:04:55.788804 kubelet[3053]: I0913 00:04:55.788669 3053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-29-1" podStartSLOduration=2.788645159 podStartE2EDuration="2.788645159s" podCreationTimestamp="2025-09-13 00:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:04:55.756584159 +0000 UTC m=+1.473884352" watchObservedRunningTime="2025-09-13 
00:04:55.788645159 +0000 UTC m=+1.505945340" Sep 13 00:04:55.824719 kubelet[3053]: I0913 00:04:55.824647 3053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-29-1" podStartSLOduration=1.8246244090000001 podStartE2EDuration="1.824624409s" podCreationTimestamp="2025-09-13 00:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:04:55.789550414 +0000 UTC m=+1.506850607" watchObservedRunningTime="2025-09-13 00:04:55.824624409 +0000 UTC m=+1.541924590" Sep 13 00:04:55.853685 kubelet[3053]: I0913 00:04:55.853613 3053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-1" podStartSLOduration=1.853593093 podStartE2EDuration="1.853593093s" podCreationTimestamp="2025-09-13 00:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:04:55.829300359 +0000 UTC m=+1.546600564" watchObservedRunningTime="2025-09-13 00:04:55.853593093 +0000 UTC m=+1.570893274" Sep 13 00:04:57.883091 kubelet[3053]: I0913 00:04:57.883046 3053 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:04:57.884545 env[1920]: time="2025-09-13T00:04:57.884493163Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 13 00:04:57.885810 kubelet[3053]: I0913 00:04:57.885655 3053 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:04:58.541976 kubelet[3053]: I0913 00:04:58.541923 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/17c4430b-a2c5-4efd-bae0-a4322eca149c-kube-proxy\") pod \"kube-proxy-7jzb8\" (UID: \"17c4430b-a2c5-4efd-bae0-a4322eca149c\") " pod="kube-system/kube-proxy-7jzb8" Sep 13 00:04:58.542284 kubelet[3053]: I0913 00:04:58.542237 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/17c4430b-a2c5-4efd-bae0-a4322eca149c-xtables-lock\") pod \"kube-proxy-7jzb8\" (UID: \"17c4430b-a2c5-4efd-bae0-a4322eca149c\") " pod="kube-system/kube-proxy-7jzb8" Sep 13 00:04:58.542490 kubelet[3053]: I0913 00:04:58.542437 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17c4430b-a2c5-4efd-bae0-a4322eca149c-lib-modules\") pod \"kube-proxy-7jzb8\" (UID: \"17c4430b-a2c5-4efd-bae0-a4322eca149c\") " pod="kube-system/kube-proxy-7jzb8" Sep 13 00:04:58.542695 kubelet[3053]: I0913 00:04:58.542642 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wpdr\" (UniqueName: \"kubernetes.io/projected/17c4430b-a2c5-4efd-bae0-a4322eca149c-kube-api-access-2wpdr\") pod \"kube-proxy-7jzb8\" (UID: \"17c4430b-a2c5-4efd-bae0-a4322eca149c\") " pod="kube-system/kube-proxy-7jzb8" Sep 13 00:04:58.656950 kubelet[3053]: I0913 00:04:58.656902 3053 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 13 00:04:58.775422 env[1920]: time="2025-09-13T00:04:58.774874094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7jzb8,Uid:17c4430b-a2c5-4efd-bae0-a4322eca149c,Namespace:kube-system,Attempt:0,}" Sep 13 00:04:58.809359 env[1920]: time="2025-09-13T00:04:58.808349062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:04:58.809359 env[1920]: time="2025-09-13T00:04:58.808455191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:04:58.809359 env[1920]: time="2025-09-13T00:04:58.808482203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:04:58.809875 env[1920]: time="2025-09-13T00:04:58.809753267Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f07ce3053114ce7f3af38b838fe4824b7f7dc86a5f45c609e81f9c5a9a13daf pid=3105 runtime=io.containerd.runc.v2 Sep 13 00:04:58.866266 systemd[1]: run-containerd-runc-k8s.io-7f07ce3053114ce7f3af38b838fe4824b7f7dc86a5f45c609e81f9c5a9a13daf-runc.fOy6oe.mount: Deactivated successfully. 
Sep 13 00:04:58.935495 kubelet[3053]: W0913 00:04:58.935431 3053 reflector.go:561] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ip-172-31-29-1" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-29-1' and this object Sep 13 00:04:58.936350 kubelet[3053]: E0913 00:04:58.936282 3053 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:ip-172-31-29-1\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ip-172-31-29-1' and this object" logger="UnhandledError" Sep 13 00:04:58.937062 kubelet[3053]: W0913 00:04:58.937020 3053 reflector.go:561] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-29-1" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-29-1' and this object Sep 13 00:04:58.937327 kubelet[3053]: E0913 00:04:58.937273 3053 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-29-1\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ip-172-31-29-1' and this object" logger="UnhandledError" Sep 13 00:04:58.945912 kubelet[3053]: I0913 00:04:58.945858 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/4c989a6c-20bd-412d-b06b-e5f57575cdca-var-lib-calico\") pod \"tigera-operator-58fc44c59b-tpzfq\" (UID: \"4c989a6c-20bd-412d-b06b-e5f57575cdca\") " pod="tigera-operator/tigera-operator-58fc44c59b-tpzfq" Sep 13 00:04:58.946213 kubelet[3053]: I0913 00:04:58.946168 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9npxp\" (UniqueName: \"kubernetes.io/projected/4c989a6c-20bd-412d-b06b-e5f57575cdca-kube-api-access-9npxp\") pod \"tigera-operator-58fc44c59b-tpzfq\" (UID: \"4c989a6c-20bd-412d-b06b-e5f57575cdca\") " pod="tigera-operator/tigera-operator-58fc44c59b-tpzfq" Sep 13 00:04:59.005446 env[1920]: time="2025-09-13T00:04:59.005391208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7jzb8,Uid:17c4430b-a2c5-4efd-bae0-a4322eca149c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f07ce3053114ce7f3af38b838fe4824b7f7dc86a5f45c609e81f9c5a9a13daf\"" Sep 13 00:04:59.013017 env[1920]: time="2025-09-13T00:04:59.012953688Z" level=info msg="CreateContainer within sandbox \"7f07ce3053114ce7f3af38b838fe4824b7f7dc86a5f45c609e81f9c5a9a13daf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:04:59.041919 env[1920]: time="2025-09-13T00:04:59.041834863Z" level=info msg="CreateContainer within sandbox \"7f07ce3053114ce7f3af38b838fe4824b7f7dc86a5f45c609e81f9c5a9a13daf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"869b40afb4e96c3e84643034032569c342d5ddf1b52dc0f1d2b48a93a7229a21\"" Sep 13 00:04:59.045027 env[1920]: time="2025-09-13T00:04:59.044156967Z" level=info msg="StartContainer for \"869b40afb4e96c3e84643034032569c342d5ddf1b52dc0f1d2b48a93a7229a21\"" Sep 13 00:04:59.160320 env[1920]: time="2025-09-13T00:04:59.157468788Z" level=info msg="StartContainer for \"869b40afb4e96c3e84643034032569c342d5ddf1b52dc0f1d2b48a93a7229a21\" returns successfully" Sep 13 00:04:59.390000 audit[3207]: NETFILTER_CFG table=mangle:38 
family=2 entries=1 op=nft_register_chain pid=3207 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:59.393286 kernel: kauditd_printk_skb: 4 callbacks suppressed Sep 13 00:04:59.393355 kernel: audit: type=1325 audit(1757721899.390:227): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=3207 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:59.390000 audit[3207]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe8945c10 a2=0 a3=1 items=0 ppid=3158 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.410167 kernel: audit: type=1300 audit(1757721899.390:227): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe8945c10 a2=0 a3=1 items=0 ppid=3158 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.390000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 00:04:59.422625 kernel: audit: type=1327 audit(1757721899.390:227): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 00:04:59.392000 audit[3208]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=3208 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:04:59.429238 kernel: audit: type=1325 audit(1757721899.392:228): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=3208 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:04:59.392000 audit[3208]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffed69a210 a2=0 a3=1 items=0 ppid=3158 pid=3208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.440688 kernel: audit: type=1300 audit(1757721899.392:228): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffed69a210 a2=0 a3=1 items=0 ppid=3158 pid=3208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.440839 kernel: audit: type=1327 audit(1757721899.392:228): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 00:04:59.392000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 00:04:59.408000 audit[3209]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=3209 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:59.451909 kernel: audit: type=1325 audit(1757721899.408:229): table=nat:40 family=2 entries=1 op=nft_register_chain pid=3209 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:59.408000 audit[3209]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffce1ef2b0 a2=0 a3=1 items=0 ppid=3158 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.408000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 13 00:04:59.470958 kernel: audit: type=1300 audit(1757721899.408:229): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffce1ef2b0 a2=0 a3=1 items=0 ppid=3158 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.471065 kernel: audit: type=1327 audit(1757721899.408:229): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 13 00:04:59.471109 kernel: audit: type=1325 audit(1757721899.414:230): table=nat:41 family=10 entries=1 op=nft_register_chain pid=3210 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:04:59.414000 audit[3210]: NETFILTER_CFG table=nat:41 family=10 entries=1 op=nft_register_chain pid=3210 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:04:59.414000 audit[3210]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffecd866c0 a2=0 a3=1 items=0 ppid=3158 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.414000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 13 00:04:59.416000 audit[3211]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_chain pid=3211 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:59.416000 audit[3211]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd68d3470 a2=0 a3=1 items=0 ppid=3158 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.416000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 13 00:04:59.416000 audit[3212]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=3212 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 
00:04:59.416000 audit[3212]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcbd41050 a2=0 a3=1 items=0 ppid=3158 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.416000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 13 00:04:59.503000 audit[3213]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=3213 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:59.503000 audit[3213]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffda34e700 a2=0 a3=1 items=0 ppid=3158 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.503000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 13 00:04:59.508000 audit[3215]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=3215 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:59.508000 audit[3215]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffdfc3be00 a2=0 a3=1 items=0 ppid=3158 pid=3215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.508000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Sep 13 
00:04:59.515000 audit[3218]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=3218 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:59.515000 audit[3218]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc3548070 a2=0 a3=1 items=0 ppid=3158 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.515000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Sep 13 00:04:59.517000 audit[3219]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=3219 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:59.517000 audit[3219]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc8871c50 a2=0 a3=1 items=0 ppid=3158 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.517000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 13 00:04:59.523000 audit[3221]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=3221 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:59.523000 audit[3221]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe960e0e0 a2=0 a3=1 items=0 ppid=3158 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 
00:04:59.523000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 13 00:04:59.525000 audit[3222]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=3222 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:59.525000 audit[3222]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffce3465f0 a2=0 a3=1 items=0 ppid=3158 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.525000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 13 00:04:59.530000 audit[3224]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=3224 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:59.530000 audit[3224]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffdae517b0 a2=0 a3=1 items=0 ppid=3158 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.530000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 13 00:04:59.538000 audit[3227]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=3227 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:59.538000 audit[3227]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=744 a0=3 a1=ffffd402dcd0 a2=0 a3=1 items=0 ppid=3158 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.538000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Sep 13 00:04:59.540000 audit[3228]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=3228 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:59.540000 audit[3228]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff16166f0 a2=0 a3=1 items=0 ppid=3158 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.540000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 13 00:04:59.546000 audit[3230]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=3230 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:59.546000 audit[3230]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcbfe1610 a2=0 a3=1 items=0 ppid=3158 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.546000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 13 00:04:59.548000 audit[3231]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:59.548000 audit[3231]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc45b46a0 a2=0 a3=1 items=0 ppid=3158 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.548000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 13 00:04:59.553000 audit[3233]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:59.553000 audit[3233]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffef7c1df0 a2=0 a3=1 items=0 ppid=3158 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.553000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 13 00:04:59.563000 audit[3236]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=3236 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:59.563000 audit[3236]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe6367510 a2=0 
a3=1 items=0 ppid=3158 pid=3236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.563000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 13 00:04:59.571000 audit[3239]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=3239 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:59.571000 audit[3239]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe7dc1140 a2=0 a3=1 items=0 ppid=3158 pid=3239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.571000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Sep 13 00:04:59.573000 audit[3240]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3240 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:59.573000 audit[3240]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffcc6603a0 a2=0 a3=1 items=0 ppid=3158 pid=3240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.573000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 
Sep 13 00:04:59.578000 audit[3242]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=3242 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:59.578000 audit[3242]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff9e29170 a2=0 a3=1 items=0 ppid=3158 pid=3242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.578000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 00:04:59.586000 audit[3245]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=3245 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:59.586000 audit[3245]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffca4ca7e0 a2=0 a3=1 items=0 ppid=3158 pid=3245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.586000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 00:04:59.588000 audit[3246]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=3246 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:59.588000 audit[3246]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffeb0dd70 a2=0 a3=1 items=0 ppid=3158 pid=3246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.588000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Sep 13 00:04:59.593000 audit[3248]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=3248 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:04:59.593000 audit[3248]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffdd97ace0 a2=0 a3=1 items=0 ppid=3158 pid=3248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.593000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Sep 13 00:04:59.640000 audit[3254]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=3254 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:04:59.640000 audit[3254]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc197f100 a2=0 a3=1 items=0 ppid=3158 pid=3254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.640000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:04:59.654000 audit[3254]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=3254 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:04:59.654000 audit[3254]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffc197f100 a2=0 a3=1 items=0 
ppid=3158 pid=3254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.654000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:04:59.657000 audit[3259]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=3259 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:04:59.657000 audit[3259]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff2f6e900 a2=0 a3=1 items=0 ppid=3158 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.657000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 13 00:04:59.670000 audit[3261]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=3261 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:04:59.670000 audit[3261]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffff5179c10 a2=0 a3=1 items=0 ppid=3158 pid=3261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.670000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Sep 13 00:04:59.685000 audit[3264]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=3264 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:04:59.685000 audit[3264]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc5646f00 a2=0 a3=1 items=0 ppid=3158 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.685000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Sep 13 00:04:59.691000 audit[3265]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=3265 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:04:59.691000 audit[3265]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffff45dc0 a2=0 a3=1 items=0 ppid=3158 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.691000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 13 00:04:59.698573 kubelet[3053]: I0913 00:04:59.698486 3053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7jzb8" podStartSLOduration=1.698462943 podStartE2EDuration="1.698462943s" podCreationTimestamp="2025-09-13 00:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:04:59.69682698 +0000 UTC m=+5.414127161" watchObservedRunningTime="2025-09-13 00:04:59.698462943 +0000 UTC m=+5.415763100" Sep 13 00:04:59.698000 audit[3267]: NETFILTER_CFG table=filter:69 family=10 
entries=1 op=nft_register_rule pid=3267 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:04:59.698000 audit[3267]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe66e5250 a2=0 a3=1 items=0 ppid=3158 pid=3267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.698000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 13 00:04:59.703000 audit[3268]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=3268 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:04:59.703000 audit[3268]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcc6042b0 a2=0 a3=1 items=0 ppid=3158 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.703000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 13 00:04:59.710000 audit[3270]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=3270 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:04:59.710000 audit[3270]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffffd8a4160 a2=0 a3=1 items=0 ppid=3158 pid=3270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.710000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Sep 13 00:04:59.720000 audit[3273]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=3273 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:04:59.720000 audit[3273]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffdbd85a00 a2=0 a3=1 items=0 ppid=3158 pid=3273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.720000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 13 00:04:59.723000 audit[3274]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=3274 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:04:59.723000 audit[3274]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff1bf8340 a2=0 a3=1 items=0 ppid=3158 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.723000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 13 00:04:59.730000 audit[3276]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=3276 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:04:59.730000 audit[3276]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=528 a0=3 a1=ffffeb327e80 a2=0 a3=1 items=0 ppid=3158 pid=3276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.730000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 13 00:04:59.734000 audit[3277]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=3277 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:04:59.734000 audit[3277]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd9a087c0 a2=0 a3=1 items=0 ppid=3158 pid=3277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.734000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 13 00:04:59.740000 audit[3279]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=3279 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:04:59.740000 audit[3279]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd09e0950 a2=0 a3=1 items=0 ppid=3158 pid=3279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.740000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 13 00:04:59.758000 audit[3282]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=3282 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:04:59.758000 audit[3282]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcb498c80 a2=0 a3=1 items=0 ppid=3158 pid=3282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.758000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Sep 13 00:04:59.767000 audit[3285]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=3285 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:04:59.767000 audit[3285]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc094df10 a2=0 a3=1 items=0 ppid=3158 pid=3285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.767000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Sep 13 00:04:59.769000 audit[3286]: NETFILTER_CFG table=nat:79 family=10 entries=1 
op=nft_register_chain pid=3286 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:04:59.769000 audit[3286]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd8c390a0 a2=0 a3=1 items=0 ppid=3158 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.769000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Sep 13 00:04:59.774000 audit[3288]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=3288 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:04:59.774000 audit[3288]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffdd99e140 a2=0 a3=1 items=0 ppid=3158 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.774000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 00:04:59.781000 audit[3291]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=3291 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:04:59.781000 audit[3291]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffda042ed0 a2=0 a3=1 items=0 ppid=3158 pid=3291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.781000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 00:04:59.784000 audit[3292]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=3292 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:04:59.784000 audit[3292]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd0a9a590 a2=0 a3=1 items=0 ppid=3158 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.784000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Sep 13 00:04:59.789000 audit[3294]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=3294 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:04:59.789000 audit[3294]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffd2542fd0 a2=0 a3=1 items=0 ppid=3158 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.789000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Sep 13 00:04:59.792000 audit[3295]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3295 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:04:59.792000 audit[3295]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff0a542a0 a2=0 a3=1 items=0 ppid=3158 
pid=3295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.792000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Sep 13 00:04:59.797000 audit[3297]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3297 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:04:59.797000 audit[3297]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd337b3f0 a2=0 a3=1 items=0 ppid=3158 pid=3297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.797000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 13 00:04:59.804000 audit[3300]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=3300 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:04:59.804000 audit[3300]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff7e629c0 a2=0 a3=1 items=0 ppid=3158 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.804000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 13 00:04:59.811000 audit[3302]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=3302 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Sep 13 00:04:59.811000 audit[3302]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 
a1=ffffc2256c70 a2=0 a3=1 items=0 ppid=3158 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.811000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:04:59.812000 audit[3302]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=3302 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Sep 13 00:04:59.812000 audit[3302]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffc2256c70 a2=0 a3=1 items=0 ppid=3158 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:04:59.812000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:05:00.059943 kubelet[3053]: E0913 00:05:00.059903 3053 projected.go:288] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Sep 13 00:05:00.060555 kubelet[3053]: E0913 00:05:00.060531 3053 projected.go:194] Error preparing data for projected volume kube-api-access-9npxp for pod tigera-operator/tigera-operator-58fc44c59b-tpzfq: failed to sync configmap cache: timed out waiting for the condition Sep 13 00:05:00.060739 kubelet[3053]: E0913 00:05:00.060718 3053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4c989a6c-20bd-412d-b06b-e5f57575cdca-kube-api-access-9npxp podName:4c989a6c-20bd-412d-b06b-e5f57575cdca nodeName:}" failed. No retries permitted until 2025-09-13 00:05:00.560684777 +0000 UTC m=+6.277984946 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9npxp" (UniqueName: "kubernetes.io/projected/4c989a6c-20bd-412d-b06b-e5f57575cdca-kube-api-access-9npxp") pod "tigera-operator-58fc44c59b-tpzfq" (UID: "4c989a6c-20bd-412d-b06b-e5f57575cdca") : failed to sync configmap cache: timed out waiting for the condition Sep 13 00:05:00.729411 env[1920]: time="2025-09-13T00:05:00.729276002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-tpzfq,Uid:4c989a6c-20bd-412d-b06b-e5f57575cdca,Namespace:tigera-operator,Attempt:0,}" Sep 13 00:05:00.772809 env[1920]: time="2025-09-13T00:05:00.772616767Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:00.772954 env[1920]: time="2025-09-13T00:05:00.772814960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:00.772954 env[1920]: time="2025-09-13T00:05:00.772897965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:00.773571 env[1920]: time="2025-09-13T00:05:00.773443718Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0bd6613285972bb746fe4d767d8bd8cc3bf42bdb707f8be76b1ab43df025451d pid=3312 runtime=io.containerd.runc.v2 Sep 13 00:05:00.888624 env[1920]: time="2025-09-13T00:05:00.887566753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-tpzfq,Uid:4c989a6c-20bd-412d-b06b-e5f57575cdca,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0bd6613285972bb746fe4d767d8bd8cc3bf42bdb707f8be76b1ab43df025451d\"" Sep 13 00:05:00.892905 env[1920]: time="2025-09-13T00:05:00.892711640Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 13 00:05:02.472426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4055456103.mount: Deactivated successfully. Sep 13 00:05:03.646791 env[1920]: time="2025-09-13T00:05:03.646712148Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:03.652784 env[1920]: time="2025-09-13T00:05:03.652694070Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:03.657119 env[1920]: time="2025-09-13T00:05:03.657057096Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:03.660804 env[1920]: time="2025-09-13T00:05:03.660722113Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}"
Sep 13 00:05:03.662180 env[1920]: time="2025-09-13T00:05:03.662135687Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\""
Sep 13 00:05:03.669535 env[1920]: time="2025-09-13T00:05:03.669105012Z" level=info msg="CreateContainer within sandbox \"0bd6613285972bb746fe4d767d8bd8cc3bf42bdb707f8be76b1ab43df025451d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 13 00:05:03.692203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4030920323.mount: Deactivated successfully.
Sep 13 00:05:03.709205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3665569622.mount: Deactivated successfully.
Sep 13 00:05:03.719740 env[1920]: time="2025-09-13T00:05:03.719656982Z" level=info msg="CreateContainer within sandbox \"0bd6613285972bb746fe4d767d8bd8cc3bf42bdb707f8be76b1ab43df025451d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"181471d92a63981385fbce8434a3c02be7a0950ee77b60083640be9fc54ef304\""
Sep 13 00:05:03.722953 env[1920]: time="2025-09-13T00:05:03.720851830Z" level=info msg="StartContainer for \"181471d92a63981385fbce8434a3c02be7a0950ee77b60083640be9fc54ef304\""
Sep 13 00:05:03.845976 env[1920]: time="2025-09-13T00:05:03.845904324Z" level=info msg="StartContainer for \"181471d92a63981385fbce8434a3c02be7a0950ee77b60083640be9fc54ef304\" returns successfully"
Sep 13 00:05:04.719174 kubelet[3053]: I0913 00:05:04.719077 3053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-tpzfq" podStartSLOduration=3.944493124 podStartE2EDuration="6.719006339s" podCreationTimestamp="2025-09-13 00:04:58 +0000 UTC" firstStartedPulling="2025-09-13 00:05:00.88963113 +0000 UTC m=+6.606931299" lastFinishedPulling="2025-09-13 00:05:03.664144357 +0000 UTC m=+9.381444514" observedRunningTime="2025-09-13 00:05:04.714930272 +0000 UTC m=+10.432230477" watchObservedRunningTime="2025-09-13 00:05:04.719006339 +0000 UTC m=+10.436306508"
Sep 13 00:05:08.128683 amazon-ssm-agent[1894]: 2025-09-13 00:05:08 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated
Sep 13 00:05:10.590000 audit[2231]: USER_END pid=2231 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Sep 13 00:05:10.590977 sudo[2231]: pam_unix(sudo:session): session closed for user root
Sep 13 00:05:10.593074 kernel: kauditd_printk_skb: 143 callbacks suppressed
Sep 13 00:05:10.593215 kernel: audit: type=1106 audit(1757721910.590:278): pid=2231 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Sep 13 00:05:10.590000 audit[2231]: CRED_DISP pid=2231 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Sep 13 00:05:10.617763 kernel: audit: type=1104 audit(1757721910.590:279): pid=2231 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Sep 13 00:05:10.625020 sshd[2227]: pam_unix(sshd:session): session closed for user core
Sep 13 00:05:10.631498 systemd[1]: sshd@6-172.31.29.1:22-139.178.89.65:36614.service: Deactivated successfully.
Sep 13 00:05:10.634095 systemd-logind[1911]: Session 7 logged out. Waiting for processes to exit.
Sep 13 00:05:10.635602 systemd[1]: session-7.scope: Deactivated successfully.
Sep 13 00:05:10.627000 audit[2227]: USER_END pid=2227 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:05:10.647225 kernel: audit: type=1106 audit(1757721910.627:280): pid=2227 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:05:10.647725 systemd-logind[1911]: Removed session 7.
Sep 13 00:05:10.627000 audit[2227]: CRED_DISP pid=2227 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:05:10.675873 kernel: audit: type=1104 audit(1757721910.627:281): pid=2227 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:05:10.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.29.1:22-139.178.89.65:36614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:05:10.688499 kernel: audit: type=1131 audit(1757721910.631:282): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.29.1:22-139.178.89.65:36614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:05:13.974000 audit[3432]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=3432 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:05:13.974000 audit[3432]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd85affa0 a2=0 a3=1 items=0 ppid=3158 pid=3432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:05:14.000397 kernel: audit: type=1325 audit(1757721913.974:283): table=filter:89 family=2 entries=15 op=nft_register_rule pid=3432 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:05:14.000526 kernel: audit: type=1300 audit(1757721913.974:283): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd85affa0 a2=0 a3=1 items=0 ppid=3158 pid=3432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:05:13.974000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:05:13.984000 audit[3432]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=3432 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:05:14.015098 kernel: audit: type=1327 audit(1757721913.974:283): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:05:14.015237 kernel: audit: type=1325 audit(1757721913.984:284): table=nat:90 family=2 entries=12 op=nft_register_rule pid=3432 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:05:13.984000 audit[3432]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd85affa0 a2=0 a3=1 items=0 ppid=3158 pid=3432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:05:14.028689 kernel: audit: type=1300 audit(1757721913.984:284): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd85affa0 a2=0 a3=1 items=0 ppid=3158 pid=3432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:05:13.984000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:05:14.017000 audit[3434]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=3434 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:05:14.017000 audit[3434]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc9512610 a2=0 a3=1 items=0 ppid=3158 pid=3434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:05:14.017000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:05:14.038000 audit[3434]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=3434 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:05:14.038000 audit[3434]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc9512610 a2=0 a3=1 items=0 ppid=3158 pid=3434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:05:14.038000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:05:23.185744 kernel: kauditd_printk_skb: 7 callbacks suppressed
Sep 13 00:05:23.185937 kernel: audit: type=1325 audit(1757721923.173:287): table=filter:93 family=2 entries=16 op=nft_register_rule pid=3436 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:05:23.173000 audit[3436]: NETFILTER_CFG table=filter:93 family=2 entries=16 op=nft_register_rule pid=3436 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:05:23.173000 audit[3436]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffa33d300 a2=0 a3=1 items=0 ppid=3158 pid=3436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:05:23.173000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:05:23.216719 kernel: audit: type=1300 audit(1757721923.173:287): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffa33d300 a2=0 a3=1 items=0 ppid=3158 pid=3436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:05:23.216872 kernel: audit: type=1327 audit(1757721923.173:287): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:05:23.204000 audit[3436]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=3436 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:05:23.204000 audit[3436]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffa33d300 a2=0 a3=1 items=0 ppid=3158 pid=3436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:05:23.242000 kernel: audit: type=1325 audit(1757721923.204:288): table=nat:94 family=2 entries=12 op=nft_register_rule pid=3436 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:05:23.242148 kernel: audit: type=1300 audit(1757721923.204:288): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffa33d300 a2=0 a3=1 items=0 ppid=3158 pid=3436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:05:23.204000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:05:23.249863 kernel: audit: type=1327 audit(1757721923.204:288): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:05:23.298000 audit[3438]: NETFILTER_CFG table=filter:95 family=2 entries=18 op=nft_register_rule pid=3438 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:05:23.306845 kernel: audit: type=1325 audit(1757721923.298:289): table=filter:95 family=2 entries=18 op=nft_register_rule pid=3438 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:05:23.298000 audit[3438]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc9cc9e80 a2=0 a3=1 items=0 ppid=3158 pid=3438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:05:23.298000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:05:23.333264 kernel: audit: type=1300 audit(1757721923.298:289): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc9cc9e80 a2=0 a3=1 items=0 ppid=3158 pid=3438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:05:23.333433 kernel: audit: type=1327 audit(1757721923.298:289): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:05:23.334000 audit[3438]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=3438 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:05:23.334000 audit[3438]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc9cc9e80 a2=0 a3=1 items=0 ppid=3158 pid=3438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:05:23.334000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:05:23.350812 kernel: audit: type=1325 audit(1757721923.334:290): table=nat:96 family=2 entries=12 op=nft_register_rule pid=3438 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:05:23.516483 kubelet[3053]: I0913 00:05:23.516327 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7705b051-b96a-4768-862c-71f469a3fe06-tigera-ca-bundle\") pod \"calico-typha-788786457-9wvjh\" (UID: \"7705b051-b96a-4768-862c-71f469a3fe06\") " pod="calico-system/calico-typha-788786457-9wvjh"
Sep 13 00:05:23.517450 kubelet[3053]: I0913 00:05:23.517368 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7705b051-b96a-4768-862c-71f469a3fe06-typha-certs\") pod \"calico-typha-788786457-9wvjh\" (UID: \"7705b051-b96a-4768-862c-71f469a3fe06\") " pod="calico-system/calico-typha-788786457-9wvjh"
Sep 13 00:05:23.518727 kubelet[3053]: I0913 00:05:23.517699 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs4r7\" (UniqueName: \"kubernetes.io/projected/7705b051-b96a-4768-862c-71f469a3fe06-kube-api-access-hs4r7\") pod \"calico-typha-788786457-9wvjh\" (UID: \"7705b051-b96a-4768-862c-71f469a3fe06\") " pod="calico-system/calico-typha-788786457-9wvjh"
Sep 13 00:05:23.764751 env[1920]: time="2025-09-13T00:05:23.764110130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-788786457-9wvjh,Uid:7705b051-b96a-4768-862c-71f469a3fe06,Namespace:calico-system,Attempt:0,}"
Sep 13 00:05:23.814578 env[1920]: time="2025-09-13T00:05:23.814426862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:05:23.814949 env[1920]: time="2025-09-13T00:05:23.814881075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:05:23.815188 env[1920]: time="2025-09-13T00:05:23.815129103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:05:23.815737 env[1920]: time="2025-09-13T00:05:23.815635168Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5410dadaefc5a28f5923da8aa8c8736114b1636a79cb154abdfcac25889d1d86 pid=3447 runtime=io.containerd.runc.v2
Sep 13 00:05:23.826834 kubelet[3053]: I0913 00:05:23.826045 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b015bd6a-9f20-4b64-96af-760000ccb5f9-node-certs\") pod \"calico-node-pwhpc\" (UID: \"b015bd6a-9f20-4b64-96af-760000ccb5f9\") " pod="calico-system/calico-node-pwhpc"
Sep 13 00:05:23.826834 kubelet[3053]: I0913 00:05:23.826134 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b015bd6a-9f20-4b64-96af-760000ccb5f9-var-run-calico\") pod \"calico-node-pwhpc\" (UID: \"b015bd6a-9f20-4b64-96af-760000ccb5f9\") " pod="calico-system/calico-node-pwhpc"
Sep 13 00:05:23.826834 kubelet[3053]: I0913 00:05:23.826185 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b015bd6a-9f20-4b64-96af-760000ccb5f9-cni-log-dir\") pod \"calico-node-pwhpc\" (UID: \"b015bd6a-9f20-4b64-96af-760000ccb5f9\") " pod="calico-system/calico-node-pwhpc"
Sep 13 00:05:23.826834 kubelet[3053]: I0913 00:05:23.826226 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b015bd6a-9f20-4b64-96af-760000ccb5f9-flexvol-driver-host\") pod \"calico-node-pwhpc\" (UID: \"b015bd6a-9f20-4b64-96af-760000ccb5f9\") " pod="calico-system/calico-node-pwhpc"
Sep 13 00:05:23.826834 kubelet[3053]: I0913 00:05:23.826279 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b015bd6a-9f20-4b64-96af-760000ccb5f9-xtables-lock\") pod \"calico-node-pwhpc\" (UID: \"b015bd6a-9f20-4b64-96af-760000ccb5f9\") " pod="calico-system/calico-node-pwhpc"
Sep 13 00:05:23.827292 kubelet[3053]: I0913 00:05:23.826319 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rhwr\" (UniqueName: \"kubernetes.io/projected/b015bd6a-9f20-4b64-96af-760000ccb5f9-kube-api-access-4rhwr\") pod \"calico-node-pwhpc\" (UID: \"b015bd6a-9f20-4b64-96af-760000ccb5f9\") " pod="calico-system/calico-node-pwhpc"
Sep 13 00:05:23.827292 kubelet[3053]: I0913 00:05:23.826359 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b015bd6a-9f20-4b64-96af-760000ccb5f9-cni-bin-dir\") pod \"calico-node-pwhpc\" (UID: \"b015bd6a-9f20-4b64-96af-760000ccb5f9\") " pod="calico-system/calico-node-pwhpc"
Sep 13 00:05:23.827292 kubelet[3053]: I0913 00:05:23.826403 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b015bd6a-9f20-4b64-96af-760000ccb5f9-cni-net-dir\") pod \"calico-node-pwhpc\" (UID: \"b015bd6a-9f20-4b64-96af-760000ccb5f9\") " pod="calico-system/calico-node-pwhpc"
Sep 13 00:05:23.827292 kubelet[3053]: I0913 00:05:23.826448 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b015bd6a-9f20-4b64-96af-760000ccb5f9-var-lib-calico\") pod \"calico-node-pwhpc\" (UID: \"b015bd6a-9f20-4b64-96af-760000ccb5f9\") " pod="calico-system/calico-node-pwhpc"
Sep 13 00:05:23.827292 kubelet[3053]: I0913 00:05:23.826530 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b015bd6a-9f20-4b64-96af-760000ccb5f9-tigera-ca-bundle\") pod \"calico-node-pwhpc\" (UID: \"b015bd6a-9f20-4b64-96af-760000ccb5f9\") " pod="calico-system/calico-node-pwhpc"
Sep 13 00:05:23.827601 kubelet[3053]: I0913 00:05:23.826582 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b015bd6a-9f20-4b64-96af-760000ccb5f9-lib-modules\") pod \"calico-node-pwhpc\" (UID: \"b015bd6a-9f20-4b64-96af-760000ccb5f9\") " pod="calico-system/calico-node-pwhpc"
Sep 13 00:05:23.827601 kubelet[3053]: I0913 00:05:23.826619 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b015bd6a-9f20-4b64-96af-760000ccb5f9-policysync\") pod \"calico-node-pwhpc\" (UID: \"b015bd6a-9f20-4b64-96af-760000ccb5f9\") " pod="calico-system/calico-node-pwhpc"
Sep 13 00:05:23.933071 kubelet[3053]: E0913 00:05:23.932991 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:05:23.933071 kubelet[3053]: W0913 00:05:23.933048 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:05:23.933334 kubelet[3053]: E0913 00:05:23.933125 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:05:23.937042 kubelet[3053]: E0913 00:05:23.936974 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:05:23.937042 kubelet[3053]: W0913 00:05:23.937027 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:05:23.937294 kubelet[3053]: E0913 00:05:23.937066 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:05:23.938810 kubelet[3053]: E0913 00:05:23.938719 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:05:23.938810 kubelet[3053]: W0913 00:05:23.938786 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:05:23.939023 kubelet[3053]: E0913 00:05:23.938837 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:05:23.942302 kubelet[3053]: E0913 00:05:23.942241 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:05:23.942302 kubelet[3053]: W0913 00:05:23.942286 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:05:23.942722 kubelet[3053]: E0913 00:05:23.942604 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:05:23.942722 kubelet[3053]: E0913 00:05:23.942718 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:05:23.943077 kubelet[3053]: W0913 00:05:23.942738 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:05:23.943077 kubelet[3053]: E0913 00:05:23.943022 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:05:23.943378 kubelet[3053]: E0913 00:05:23.943323 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:05:23.943378 kubelet[3053]: W0913 00:05:23.943358 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:05:23.943596 kubelet[3053]: E0913 00:05:23.943556 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:05:23.943952 kubelet[3053]: E0913 00:05:23.943892 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:05:23.943952 kubelet[3053]: W0913 00:05:23.943926 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:05:23.944191 kubelet[3053]: E0913 00:05:23.944126 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:05:23.949836 kubelet[3053]: E0913 00:05:23.944405 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:05:23.949836 kubelet[3053]: W0913 00:05:23.944437 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:05:23.949836 kubelet[3053]: E0913 00:05:23.944620 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:05:23.949836 kubelet[3053]: E0913 00:05:23.944924 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:05:23.949836 kubelet[3053]: W0913 00:05:23.944948 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:05:23.949836 kubelet[3053]: E0913 00:05:23.945131 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:05:23.949836 kubelet[3053]: E0913 00:05:23.945930 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:05:23.949836 kubelet[3053]: W0913 00:05:23.945962 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:05:23.949836 kubelet[3053]: E0913 00:05:23.946182 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:05:23.949836 kubelet[3053]: E0913 00:05:23.947948 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:05:23.950653 kubelet[3053]: W0913 00:05:23.947980 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:05:23.950653 kubelet[3053]: E0913 00:05:23.948199 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:05:23.950653 kubelet[3053]: E0913 00:05:23.948496 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:05:23.950653 kubelet[3053]: W0913 00:05:23.948524 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:05:23.950653 kubelet[3053]: E0913 00:05:23.948715 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:05:23.950653 kubelet[3053]: E0913 00:05:23.949131 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:05:23.950653 kubelet[3053]: W0913 00:05:23.949165 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:05:23.950653 kubelet[3053]: E0913 00:05:23.949358 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:05:23.950653 kubelet[3053]: E0913 00:05:23.949686 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:05:23.950653 kubelet[3053]: W0913 00:05:23.949713 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:05:23.951305 kubelet[3053]: E0913 00:05:23.949954 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:05:23.952112 kubelet[3053]: E0913 00:05:23.952031 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:05:23.952112 kubelet[3053]: W0913 00:05:23.952076 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:05:23.952406 kubelet[3053]: E0913 00:05:23.952312 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:05:23.952799 kubelet[3053]: E0913 00:05:23.952717 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:05:23.952977 kubelet[3053]: W0913 00:05:23.952753 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:05:23.953096 kubelet[3053]: E0913 00:05:23.953046 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:05:23.956746 kubelet[3053]: E0913 00:05:23.953360 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:05:23.956746 kubelet[3053]: W0913 00:05:23.953416 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:05:23.956746 kubelet[3053]: E0913 00:05:23.955802 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:05:23.956746 kubelet[3053]: W0913 00:05:23.955835 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:05:23.956746 kubelet[3053]: E0913 00:05:23.956001 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:05:23.956746 kubelet[3053]: E0913 00:05:23.956121 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:05:23.956746 kubelet[3053]: E0913 00:05:23.956309 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:05:23.956746 kubelet[3053]: W0913 00:05:23.956331 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:05:23.957476 kubelet[3053]: E0913 00:05:23.956788 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:05:23.957476 kubelet[3053]: W0913 00:05:23.956816 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:05:23.960915 kubelet[3053]: E0913 00:05:23.960811 3053 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cdkj7" podUID="b0810e85-a772-4d04-a431-1cf9fa4bc8f8"
Sep 13 00:05:23.961998 kubelet[3053]: E0913 00:05:23.961398 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:05:23.961998 kubelet[3053]: E0913 00:05:23.961480 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:05:23.964386 kubelet[3053]: E0913 00:05:23.964257 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:05:23.964386 kubelet[3053]: W0913 00:05:23.964306 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:05:23.964386 kubelet[3053]: E0913 00:05:23.964385 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:05:23.966400 kubelet[3053]: E0913 00:05:23.966313 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:05:23.966400 kubelet[3053]: W0913 00:05:23.966375 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:05:23.966620 kubelet[3053]: E0913 00:05:23.966414 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 13 00:05:23.998142 kubelet[3053]: E0913 00:05:23.997739 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:23.998142 kubelet[3053]: W0913 00:05:23.997818 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:23.998142 kubelet[3053]: E0913 00:05:23.997856 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:23.999020 kubelet[3053]: E0913 00:05:23.998604 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:23.999020 kubelet[3053]: W0913 00:05:23.998662 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:23.999020 kubelet[3053]: E0913 00:05:23.998699 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:23.999953 kubelet[3053]: E0913 00:05:23.999492 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:23.999953 kubelet[3053]: W0913 00:05:23.999547 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:23.999953 kubelet[3053]: E0913 00:05:23.999582 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:24.000872 kubelet[3053]: E0913 00:05:24.000420 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.000872 kubelet[3053]: W0913 00:05:24.000478 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.000872 kubelet[3053]: E0913 00:05:24.000514 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:24.001613 kubelet[3053]: E0913 00:05:24.001292 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.001613 kubelet[3053]: W0913 00:05:24.001326 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.001613 kubelet[3053]: E0913 00:05:24.001359 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:24.006051 kubelet[3053]: E0913 00:05:24.004014 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.006051 kubelet[3053]: W0913 00:05:24.004059 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.006051 kubelet[3053]: E0913 00:05:24.004099 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:24.008881 kubelet[3053]: E0913 00:05:24.006874 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.008881 kubelet[3053]: W0913 00:05:24.006919 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.008881 kubelet[3053]: E0913 00:05:24.006956 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:24.008881 kubelet[3053]: E0913 00:05:24.007445 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.008881 kubelet[3053]: W0913 00:05:24.007473 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.008881 kubelet[3053]: E0913 00:05:24.007502 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:24.008881 kubelet[3053]: E0913 00:05:24.008097 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.008881 kubelet[3053]: W0913 00:05:24.008124 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.008881 kubelet[3053]: E0913 00:05:24.008153 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:24.008881 kubelet[3053]: E0913 00:05:24.008557 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.009592 kubelet[3053]: W0913 00:05:24.008581 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.009592 kubelet[3053]: E0913 00:05:24.008606 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:24.009592 kubelet[3053]: E0913 00:05:24.009069 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.009592 kubelet[3053]: W0913 00:05:24.009096 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.009592 kubelet[3053]: E0913 00:05:24.009126 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:24.009592 kubelet[3053]: E0913 00:05:24.009534 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.009592 kubelet[3053]: W0913 00:05:24.009557 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.009592 kubelet[3053]: E0913 00:05:24.009583 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:24.016269 kubelet[3053]: E0913 00:05:24.011364 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.016269 kubelet[3053]: W0913 00:05:24.011408 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.016269 kubelet[3053]: E0913 00:05:24.011445 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:24.016269 kubelet[3053]: E0913 00:05:24.011915 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.016269 kubelet[3053]: W0913 00:05:24.011943 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.016269 kubelet[3053]: E0913 00:05:24.011971 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:24.016269 kubelet[3053]: E0913 00:05:24.012379 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.016269 kubelet[3053]: W0913 00:05:24.012404 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.016269 kubelet[3053]: E0913 00:05:24.012432 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:24.025382 kubelet[3053]: E0913 00:05:24.025319 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.025382 kubelet[3053]: W0913 00:05:24.025357 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.025634 kubelet[3053]: E0913 00:05:24.025410 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:24.029985 kubelet[3053]: E0913 00:05:24.027973 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.030183 kubelet[3053]: W0913 00:05:24.029989 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.030183 kubelet[3053]: E0913 00:05:24.030036 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:24.030573 kubelet[3053]: E0913 00:05:24.030489 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.030573 kubelet[3053]: W0913 00:05:24.030534 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.030573 kubelet[3053]: E0913 00:05:24.030565 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:24.031254 kubelet[3053]: E0913 00:05:24.031185 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.031254 kubelet[3053]: W0913 00:05:24.031231 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.031468 kubelet[3053]: E0913 00:05:24.031267 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:24.031753 kubelet[3053]: E0913 00:05:24.031695 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.031753 kubelet[3053]: W0913 00:05:24.031737 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.032005 kubelet[3053]: E0913 00:05:24.031789 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:24.039667 kubelet[3053]: E0913 00:05:24.035907 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.039667 kubelet[3053]: W0913 00:05:24.035945 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.039667 kubelet[3053]: E0913 00:05:24.036011 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:24.039667 kubelet[3053]: I0913 00:05:24.036067 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b0810e85-a772-4d04-a431-1cf9fa4bc8f8-varrun\") pod \"csi-node-driver-cdkj7\" (UID: \"b0810e85-a772-4d04-a431-1cf9fa4bc8f8\") " pod="calico-system/csi-node-driver-cdkj7" Sep 13 00:05:24.039667 kubelet[3053]: E0913 00:05:24.037101 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.039667 kubelet[3053]: W0913 00:05:24.037148 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.039667 kubelet[3053]: E0913 00:05:24.037187 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:24.039667 kubelet[3053]: I0913 00:05:24.037237 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b0810e85-a772-4d04-a431-1cf9fa4bc8f8-registration-dir\") pod \"csi-node-driver-cdkj7\" (UID: \"b0810e85-a772-4d04-a431-1cf9fa4bc8f8\") " pod="calico-system/csi-node-driver-cdkj7" Sep 13 00:05:24.045152 kubelet[3053]: E0913 00:05:24.040620 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.045152 kubelet[3053]: W0913 00:05:24.040662 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.045152 kubelet[3053]: E0913 00:05:24.040698 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:24.045152 kubelet[3053]: I0913 00:05:24.040746 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8955\" (UniqueName: \"kubernetes.io/projected/b0810e85-a772-4d04-a431-1cf9fa4bc8f8-kube-api-access-w8955\") pod \"csi-node-driver-cdkj7\" (UID: \"b0810e85-a772-4d04-a431-1cf9fa4bc8f8\") " pod="calico-system/csi-node-driver-cdkj7" Sep 13 00:05:24.046061 kubelet[3053]: E0913 00:05:24.045708 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.046061 kubelet[3053]: W0913 00:05:24.045749 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.046061 kubelet[3053]: E0913 00:05:24.045816 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:24.053511 kubelet[3053]: E0913 00:05:24.053157 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.053511 kubelet[3053]: W0913 00:05:24.053194 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.053511 kubelet[3053]: E0913 00:05:24.053234 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:24.053511 kubelet[3053]: I0913 00:05:24.053287 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0810e85-a772-4d04-a431-1cf9fa4bc8f8-kubelet-dir\") pod \"csi-node-driver-cdkj7\" (UID: \"b0810e85-a772-4d04-a431-1cf9fa4bc8f8\") " pod="calico-system/csi-node-driver-cdkj7" Sep 13 00:05:24.062853 kubelet[3053]: E0913 00:05:24.062806 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.063094 kubelet[3053]: W0913 00:05:24.063056 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.063429 kubelet[3053]: E0913 00:05:24.063395 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:24.063691 kubelet[3053]: I0913 00:05:24.063658 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b0810e85-a772-4d04-a431-1cf9fa4bc8f8-socket-dir\") pod \"csi-node-driver-cdkj7\" (UID: \"b0810e85-a772-4d04-a431-1cf9fa4bc8f8\") " pod="calico-system/csi-node-driver-cdkj7" Sep 13 00:05:24.072826 kubelet[3053]: E0913 00:05:24.063976 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.072826 kubelet[3053]: W0913 00:05:24.064019 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.072826 kubelet[3053]: E0913 00:05:24.064068 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:24.072826 kubelet[3053]: E0913 00:05:24.064619 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.072826 kubelet[3053]: W0913 00:05:24.064647 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.072826 kubelet[3053]: E0913 00:05:24.064680 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:24.072826 kubelet[3053]: E0913 00:05:24.065215 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.072826 kubelet[3053]: W0913 00:05:24.065242 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.072826 kubelet[3053]: E0913 00:05:24.065272 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:24.072826 kubelet[3053]: E0913 00:05:24.065830 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.073544 kubelet[3053]: W0913 00:05:24.065858 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.073544 kubelet[3053]: E0913 00:05:24.065890 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:24.073544 kubelet[3053]: E0913 00:05:24.066381 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.073544 kubelet[3053]: W0913 00:05:24.066409 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.073544 kubelet[3053]: E0913 00:05:24.066439 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:24.073544 kubelet[3053]: E0913 00:05:24.067958 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.073544 kubelet[3053]: W0913 00:05:24.067992 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.073544 kubelet[3053]: E0913 00:05:24.068027 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:24.073544 kubelet[3053]: E0913 00:05:24.071804 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.073544 kubelet[3053]: W0913 00:05:24.071837 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.074258 kubelet[3053]: E0913 00:05:24.072347 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.074258 kubelet[3053]: W0913 00:05:24.072373 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.074258 kubelet[3053]: E0913 00:05:24.072838 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.074258 kubelet[3053]: W0913 00:05:24.072863 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.074258 kubelet[3053]: E0913 00:05:24.072912 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:24.074258 kubelet[3053]: E0913 00:05:24.072938 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:24.074258 kubelet[3053]: E0913 00:05:24.072962 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:24.080635 kubelet[3053]: E0913 00:05:24.075725 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.080635 kubelet[3053]: W0913 00:05:24.075793 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.080635 kubelet[3053]: E0913 00:05:24.075830 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:24.080635 kubelet[3053]: E0913 00:05:24.076344 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.080635 kubelet[3053]: W0913 00:05:24.076372 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.080635 kubelet[3053]: E0913 00:05:24.076399 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:24.088097 env[1920]: time="2025-09-13T00:05:24.080104672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pwhpc,Uid:b015bd6a-9f20-4b64-96af-760000ccb5f9,Namespace:calico-system,Attempt:0,}" Sep 13 00:05:24.131685 env[1920]: time="2025-09-13T00:05:24.131585148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:24.139811 env[1920]: time="2025-09-13T00:05:24.131971536Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:24.140191 env[1920]: time="2025-09-13T00:05:24.140118243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:24.144149 env[1920]: time="2025-09-13T00:05:24.143856550Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b0d05ecce5441b63f30fa55c52c4d5e24635954183210cd2c5d3057621f51d3 pid=3553 runtime=io.containerd.runc.v2 Sep 13 00:05:24.165717 kubelet[3053]: E0913 00:05:24.165653 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.165717 kubelet[3053]: W0913 00:05:24.165698 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.165963 kubelet[3053]: E0913 00:05:24.165736 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:24.166580 kubelet[3053]: E0913 00:05:24.166530 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.166580 kubelet[3053]: W0913 00:05:24.166570 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.166808 kubelet[3053]: E0913 00:05:24.166613 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:24.170052 kubelet[3053]: E0913 00:05:24.169987 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.170052 kubelet[3053]: W0913 00:05:24.170036 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.170284 kubelet[3053]: E0913 00:05:24.170089 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:24.170875 kubelet[3053]: E0913 00:05:24.170814 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.170875 kubelet[3053]: W0913 00:05:24.170856 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.174167 kubelet[3053]: E0913 00:05:24.171073 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:24.174167 kubelet[3053]: E0913 00:05:24.173983 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.174167 kubelet[3053]: W0913 00:05:24.174016 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.174167 kubelet[3053]: E0913 00:05:24.174103 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:24.175034 kubelet[3053]: E0913 00:05:24.174979 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.175034 kubelet[3053]: W0913 00:05:24.175020 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.175319 kubelet[3053]: E0913 00:05:24.175261 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:24.177069 kubelet[3053]: E0913 00:05:24.176994 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.177069 kubelet[3053]: W0913 00:05:24.177042 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.177524 kubelet[3053]: E0913 00:05:24.177313 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:24.179722 kubelet[3053]: E0913 00:05:24.179666 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.179722 kubelet[3053]: W0913 00:05:24.179710 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.180162 kubelet[3053]: E0913 00:05:24.180004 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:24.185042 kubelet[3053]: E0913 00:05:24.184983 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.185042 kubelet[3053]: W0913 00:05:24.185029 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.185575 kubelet[3053]: E0913 00:05:24.185315 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:24.193073 kubelet[3053]: E0913 00:05:24.193007 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.193073 kubelet[3053]: W0913 00:05:24.193054 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.193336 kubelet[3053]: E0913 00:05:24.193287 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:24.193720 kubelet[3053]: E0913 00:05:24.193672 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.193720 kubelet[3053]: W0913 00:05:24.193707 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.193989 kubelet[3053]: E0913 00:05:24.193936 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:24.195363 kubelet[3053]: E0913 00:05:24.194350 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.195363 kubelet[3053]: W0913 00:05:24.194392 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.195363 kubelet[3053]: E0913 00:05:24.194907 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.195363 kubelet[3053]: W0913 00:05:24.194941 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.195363 kubelet[3053]: E0913 00:05:24.195266 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:24.195363 kubelet[3053]: E0913 00:05:24.195314 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:24.195897 kubelet[3053]: E0913 00:05:24.195418 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.195897 kubelet[3053]: W0913 00:05:24.195438 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.195897 kubelet[3053]: E0913 00:05:24.195607 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:24.196085 kubelet[3053]: E0913 00:05:24.195900 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.196085 kubelet[3053]: W0913 00:05:24.195921 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.196201 kubelet[3053]: E0913 00:05:24.196115 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:24.199583 kubelet[3053]: E0913 00:05:24.196424 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.199583 kubelet[3053]: W0913 00:05:24.196458 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.199583 kubelet[3053]: E0913 00:05:24.196650 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:24.202842 kubelet[3053]: E0913 00:05:24.202388 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.202842 kubelet[3053]: W0913 00:05:24.202432 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.202842 kubelet[3053]: E0913 00:05:24.202637 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:24.205828 kubelet[3053]: E0913 00:05:24.204973 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.205828 kubelet[3053]: W0913 00:05:24.205020 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.205828 kubelet[3053]: E0913 00:05:24.205231 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:24.205828 kubelet[3053]: E0913 00:05:24.205543 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.205828 kubelet[3053]: W0913 00:05:24.205568 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.206330 kubelet[3053]: E0913 00:05:24.205880 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:24.206330 kubelet[3053]: E0913 00:05:24.206180 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.206330 kubelet[3053]: W0913 00:05:24.206202 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.206622 kubelet[3053]: E0913 00:05:24.206549 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:24.210846 kubelet[3053]: E0913 00:05:24.209968 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.210846 kubelet[3053]: W0913 00:05:24.210016 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.210846 kubelet[3053]: E0913 00:05:24.210283 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:24.210846 kubelet[3053]: E0913 00:05:24.210600 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.210846 kubelet[3053]: W0913 00:05:24.210626 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.210846 kubelet[3053]: E0913 00:05:24.210852 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:24.211373 kubelet[3053]: E0913 00:05:24.211156 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.211373 kubelet[3053]: W0913 00:05:24.211178 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.211620 kubelet[3053]: E0913 00:05:24.211545 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:24.211620 kubelet[3053]: E0913 00:05:24.211591 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.211620 kubelet[3053]: W0913 00:05:24.211610 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.212114 kubelet[3053]: E0913 00:05:24.211642 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:24.219667 kubelet[3053]: E0913 00:05:24.219602 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.219667 kubelet[3053]: W0913 00:05:24.219643 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.219980 kubelet[3053]: E0913 00:05:24.219682 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:24.269105 kubelet[3053]: E0913 00:05:24.269062 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:24.270089 kubelet[3053]: W0913 00:05:24.270013 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:24.270352 kubelet[3053]: E0913 00:05:24.270313 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:24.379412 env[1920]: time="2025-09-13T00:05:24.379252127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-788786457-9wvjh,Uid:7705b051-b96a-4768-862c-71f469a3fe06,Namespace:calico-system,Attempt:0,} returns sandbox id \"5410dadaefc5a28f5923da8aa8c8736114b1636a79cb154abdfcac25889d1d86\"" Sep 13 00:05:24.387916 env[1920]: time="2025-09-13T00:05:24.387791798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 13 00:05:24.429000 audit[3614]: NETFILTER_CFG table=filter:97 family=2 entries=21 op=nft_register_rule pid=3614 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:05:24.429000 audit[3614]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe5836490 a2=0 a3=1 items=0 ppid=3158 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:24.429000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:05:24.445000 audit[3614]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=3614 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:05:24.445000 audit[3614]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe5836490 a2=0 a3=1 items=0 ppid=3158 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:24.445000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:05:24.461844 env[1920]: time="2025-09-13T00:05:24.461580807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pwhpc,Uid:b015bd6a-9f20-4b64-96af-760000ccb5f9,Namespace:calico-system,Attempt:0,} returns sandbox id \"5b0d05ecce5441b63f30fa55c52c4d5e24635954183210cd2c5d3057621f51d3\"" Sep 13 00:05:24.666595 systemd[1]: run-containerd-runc-k8s.io-5410dadaefc5a28f5923da8aa8c8736114b1636a79cb154abdfcac25889d1d86-runc.vw1qEq.mount: Deactivated successfully. Sep 13 00:05:25.611090 kubelet[3053]: E0913 00:05:25.610295 3053 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cdkj7" podUID="b0810e85-a772-4d04-a431-1cf9fa4bc8f8" Sep 13 00:05:25.718643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount628046235.mount: Deactivated successfully. 
Sep 13 00:05:27.458449 env[1920]: time="2025-09-13T00:05:27.458382132Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:27.460863 env[1920]: time="2025-09-13T00:05:27.460796968Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:27.463173 env[1920]: time="2025-09-13T00:05:27.463112575Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:27.465507 env[1920]: time="2025-09-13T00:05:27.465446638Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:27.466772 env[1920]: time="2025-09-13T00:05:27.466703688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 13 00:05:27.470116 env[1920]: time="2025-09-13T00:05:27.470062457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 13 00:05:27.509117 env[1920]: time="2025-09-13T00:05:27.509033091Z" level=info msg="CreateContainer within sandbox \"5410dadaefc5a28f5923da8aa8c8736114b1636a79cb154abdfcac25889d1d86\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 13 00:05:27.531496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount676121146.mount: Deactivated successfully. 
Sep 13 00:05:27.537013 env[1920]: time="2025-09-13T00:05:27.536927756Z" level=info msg="CreateContainer within sandbox \"5410dadaefc5a28f5923da8aa8c8736114b1636a79cb154abdfcac25889d1d86\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e4fca8cb74d31f85a6ab161f5c2f36846ef68dfb4b1ec242846e4b6f43a6849a\"" Sep 13 00:05:27.540548 env[1920]: time="2025-09-13T00:05:27.540481153Z" level=info msg="StartContainer for \"e4fca8cb74d31f85a6ab161f5c2f36846ef68dfb4b1ec242846e4b6f43a6849a\"" Sep 13 00:05:27.610558 kubelet[3053]: E0913 00:05:27.610475 3053 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cdkj7" podUID="b0810e85-a772-4d04-a431-1cf9fa4bc8f8" Sep 13 00:05:27.679735 env[1920]: time="2025-09-13T00:05:27.679660479Z" level=info msg="StartContainer for \"e4fca8cb74d31f85a6ab161f5c2f36846ef68dfb4b1ec242846e4b6f43a6849a\" returns successfully" Sep 13 00:05:27.790535 kubelet[3053]: E0913 00:05:27.790476 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:27.790535 kubelet[3053]: W0913 00:05:27.790537 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:27.790796 kubelet[3053]: E0913 00:05:27.790572 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:27.791141 kubelet[3053]: E0913 00:05:27.791099 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:27.791141 kubelet[3053]: W0913 00:05:27.791133 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:27.791348 kubelet[3053]: E0913 00:05:27.791163 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:27.791736 kubelet[3053]: E0913 00:05:27.791694 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:27.791870 kubelet[3053]: W0913 00:05:27.791733 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:27.791870 kubelet[3053]: E0913 00:05:27.791763 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:27.792230 kubelet[3053]: E0913 00:05:27.792187 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:27.792230 kubelet[3053]: W0913 00:05:27.792220 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:27.792421 kubelet[3053]: E0913 00:05:27.792246 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:27.792716 kubelet[3053]: E0913 00:05:27.792671 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:27.792716 kubelet[3053]: W0913 00:05:27.792704 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:27.792939 kubelet[3053]: E0913 00:05:27.792733 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:27.793216 kubelet[3053]: E0913 00:05:27.793162 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:27.793216 kubelet[3053]: W0913 00:05:27.793206 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:27.793393 kubelet[3053]: E0913 00:05:27.793233 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:27.794291 kubelet[3053]: E0913 00:05:27.793625 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:27.794291 kubelet[3053]: W0913 00:05:27.793659 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:27.794291 kubelet[3053]: E0913 00:05:27.793686 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:27.794605 kubelet[3053]: E0913 00:05:27.794557 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:27.794605 kubelet[3053]: W0913 00:05:27.794598 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:27.794822 kubelet[3053]: E0913 00:05:27.794630 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:27.796824 kubelet[3053]: E0913 00:05:27.795226 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:27.796824 kubelet[3053]: W0913 00:05:27.795261 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:27.796824 kubelet[3053]: E0913 00:05:27.795290 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:27.796824 kubelet[3053]: E0913 00:05:27.795643 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:27.796824 kubelet[3053]: W0913 00:05:27.795661 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:27.796824 kubelet[3053]: E0913 00:05:27.795682 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:27.796824 kubelet[3053]: E0913 00:05:27.796091 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:27.796824 kubelet[3053]: W0913 00:05:27.796110 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:27.796824 kubelet[3053]: E0913 00:05:27.796138 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:27.796824 kubelet[3053]: E0913 00:05:27.796443 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:27.797608 kubelet[3053]: W0913 00:05:27.796459 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:27.797608 kubelet[3053]: E0913 00:05:27.796480 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:27.797608 kubelet[3053]: E0913 00:05:27.796870 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:27.797608 kubelet[3053]: W0913 00:05:27.796891 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:27.797608 kubelet[3053]: E0913 00:05:27.796920 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:27.798052 kubelet[3053]: E0913 00:05:27.798000 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:27.798052 kubelet[3053]: W0913 00:05:27.798040 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:27.798230 kubelet[3053]: E0913 00:05:27.798073 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:27.799821 kubelet[3053]: E0913 00:05:27.798586 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:27.799821 kubelet[3053]: W0913 00:05:27.798623 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:27.799821 kubelet[3053]: E0913 00:05:27.798653 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:27.816508 kubelet[3053]: E0913 00:05:27.816449 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:27.816508 kubelet[3053]: W0913 00:05:27.816494 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:27.816753 kubelet[3053]: E0913 00:05:27.816531 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:27.817176 kubelet[3053]: E0913 00:05:27.817129 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:27.817281 kubelet[3053]: W0913 00:05:27.817175 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:27.817281 kubelet[3053]: E0913 00:05:27.817210 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:27.817738 kubelet[3053]: E0913 00:05:27.817705 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:27.817902 kubelet[3053]: W0913 00:05:27.817737 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:27.817902 kubelet[3053]: E0913 00:05:27.817810 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:27.818314 kubelet[3053]: E0913 00:05:27.818281 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:27.818314 kubelet[3053]: W0913 00:05:27.818313 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:27.818505 kubelet[3053]: E0913 00:05:27.818474 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:27.818826 kubelet[3053]: E0913 00:05:27.818757 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:27.818826 kubelet[3053]: W0913 00:05:27.818819 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:27.819005 kubelet[3053]: E0913 00:05:27.818973 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:27.819300 kubelet[3053]: E0913 00:05:27.819269 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:27.819397 kubelet[3053]: W0913 00:05:27.819299 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:27.819492 kubelet[3053]: E0913 00:05:27.819453 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:27.819811 kubelet[3053]: E0913 00:05:27.819740 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:27.819811 kubelet[3053]: W0913 00:05:27.819799 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:27.819985 kubelet[3053]: E0913 00:05:27.819832 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:27.820259 kubelet[3053]: E0913 00:05:27.820215 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:27.820374 kubelet[3053]: W0913 00:05:27.820258 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:27.820442 kubelet[3053]: E0913 00:05:27.820413 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:27.820724 kubelet[3053]: E0913 00:05:27.820693 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:27.820887 kubelet[3053]: W0913 00:05:27.820723 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:27.820962 kubelet[3053]: E0913 00:05:27.820919 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:27.821355 kubelet[3053]: E0913 00:05:27.821313 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:27.821465 kubelet[3053]: W0913 00:05:27.821352 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:27.821736 kubelet[3053]: E0913 00:05:27.821579 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:27.821906 kubelet[3053]: E0913 00:05:27.821873 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:27.822002 kubelet[3053]: W0913 00:05:27.821905 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:27.822002 kubelet[3053]: E0913 00:05:27.821938 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:27.823324 kubelet[3053]: E0913 00:05:27.822691 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:27.823324 kubelet[3053]: W0913 00:05:27.822738 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:27.823324 kubelet[3053]: E0913 00:05:27.822974 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:27.823324 kubelet[3053]: E0913 00:05:27.823220 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:27.823324 kubelet[3053]: W0913 00:05:27.823240 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:27.823963 kubelet[3053]: E0913 00:05:27.823410 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:27.823963 kubelet[3053]: E0913 00:05:27.823593 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:27.823963 kubelet[3053]: W0913 00:05:27.823611 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:27.823963 kubelet[3053]: E0913 00:05:27.823640 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:27.824214 kubelet[3053]: E0913 00:05:27.824140 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:27.824214 kubelet[3053]: W0913 00:05:27.824163 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:27.824214 kubelet[3053]: E0913 00:05:27.824201 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:27.827175 kubelet[3053]: E0913 00:05:27.826894 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:27.827175 kubelet[3053]: W0913 00:05:27.826931 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:27.827175 kubelet[3053]: E0913 00:05:27.827061 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:27.828050 kubelet[3053]: E0913 00:05:27.827998 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:27.828050 kubelet[3053]: W0913 00:05:27.828034 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:27.828279 kubelet[3053]: E0913 00:05:27.828067 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:27.829610 kubelet[3053]: E0913 00:05:27.829412 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:27.829610 kubelet[3053]: W0913 00:05:27.829451 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:27.829610 kubelet[3053]: E0913 00:05:27.829483 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:28.816530 kubelet[3053]: I0913 00:05:28.814662 3053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-788786457-9wvjh" podStartSLOduration=2.732101904 podStartE2EDuration="5.814640907s" podCreationTimestamp="2025-09-13 00:05:23 +0000 UTC" firstStartedPulling="2025-09-13 00:05:24.386879989 +0000 UTC m=+30.104180158" lastFinishedPulling="2025-09-13 00:05:27.469419004 +0000 UTC m=+33.186719161" observedRunningTime="2025-09-13 00:05:27.811497017 +0000 UTC m=+33.528797198" watchObservedRunningTime="2025-09-13 00:05:28.814640907 +0000 UTC m=+34.531941076" Sep 13 00:05:28.819241 kubelet[3053]: E0913 00:05:28.818960 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:28.819241 kubelet[3053]: W0913 00:05:28.819001 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:28.819241 kubelet[3053]: E0913 00:05:28.819035 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:28.823607 kubelet[3053]: E0913 00:05:28.823334 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:28.823607 kubelet[3053]: W0913 00:05:28.823382 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:28.823607 kubelet[3053]: E0913 00:05:28.823415 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:28.828137 kubelet[3053]: E0913 00:05:28.827839 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:28.828137 kubelet[3053]: W0913 00:05:28.827874 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:28.828137 kubelet[3053]: E0913 00:05:28.827908 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:28.832280 kubelet[3053]: E0913 00:05:28.831943 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:28.832280 kubelet[3053]: W0913 00:05:28.831979 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:28.832280 kubelet[3053]: E0913 00:05:28.832013 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:28.836255 kubelet[3053]: E0913 00:05:28.832675 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:28.836255 kubelet[3053]: W0913 00:05:28.832702 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:28.836255 kubelet[3053]: E0913 00:05:28.832733 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:28.839002 kubelet[3053]: E0913 00:05:28.836434 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:28.839002 kubelet[3053]: W0913 00:05:28.836468 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:28.839002 kubelet[3053]: E0913 00:05:28.836528 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:28.842182 kubelet[3053]: E0913 00:05:28.841909 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:28.842182 kubelet[3053]: W0913 00:05:28.841944 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:28.842182 kubelet[3053]: E0913 00:05:28.841977 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:28.843050 kubelet[3053]: E0913 00:05:28.842632 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:28.843050 kubelet[3053]: W0913 00:05:28.842664 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:28.843050 kubelet[3053]: E0913 00:05:28.842692 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:28.843820 kubelet[3053]: E0913 00:05:28.843410 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:28.843820 kubelet[3053]: W0913 00:05:28.843439 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:28.843820 kubelet[3053]: E0913 00:05:28.843470 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:28.854925 kernel: kauditd_printk_skb: 8 callbacks suppressed Sep 13 00:05:28.855064 kernel: audit: type=1325 audit(1757721928.846:293): table=filter:99 family=2 entries=21 op=nft_register_rule pid=3708 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:05:28.846000 audit[3708]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=3708 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:05:28.855670 kubelet[3053]: E0913 00:05:28.855354 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:28.855670 kubelet[3053]: W0913 00:05:28.855389 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:28.855670 kubelet[3053]: E0913 00:05:28.855423 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:28.856299 kubelet[3053]: E0913 00:05:28.856052 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:28.856299 kubelet[3053]: W0913 00:05:28.856077 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:28.856299 kubelet[3053]: E0913 00:05:28.856107 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:28.857082 kubelet[3053]: E0913 00:05:28.856690 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:28.857082 kubelet[3053]: W0913 00:05:28.856716 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:28.857082 kubelet[3053]: E0913 00:05:28.856744 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:28.857763 kubelet[3053]: E0913 00:05:28.857539 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:28.857763 kubelet[3053]: W0913 00:05:28.857566 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:28.857763 kubelet[3053]: E0913 00:05:28.857594 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:28.858325 kubelet[3053]: E0913 00:05:28.858139 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:28.858325 kubelet[3053]: W0913 00:05:28.858161 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:28.858325 kubelet[3053]: E0913 00:05:28.858182 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:28.858733 kubelet[3053]: E0913 00:05:28.858641 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:28.846000 audit[3708]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffde602df0 a2=0 a3=1 items=0 ppid=3158 pid=3708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:28.871088 kubelet[3053]: W0913 00:05:28.866831 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:28.871088 kubelet[3053]: E0913 00:05:28.866916 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:28.871936 kubelet[3053]: E0913 00:05:28.871882 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:28.871936 kubelet[3053]: W0913 00:05:28.871924 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:28.872156 kubelet[3053]: E0913 00:05:28.871959 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:28.874355 kubelet[3053]: E0913 00:05:28.872924 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:28.874355 kubelet[3053]: W0913 00:05:28.874347 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:28.874571 kubelet[3053]: E0913 00:05:28.874397 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:28.881051 kernel: audit: type=1300 audit(1757721928.846:293): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffde602df0 a2=0 a3=1 items=0 ppid=3158 pid=3708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:28.881178 kernel: audit: type=1327 audit(1757721928.846:293): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:05:28.846000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:05:28.881291 kubelet[3053]: E0913 00:05:28.877557 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:28.881291 kubelet[3053]: W0913 00:05:28.877585 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:28.881562 kubelet[3053]: E0913 00:05:28.881522 3053 plugins.go:691] "Error dynamically probing 
plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:28.873000 audit[3708]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_register_chain pid=3708 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:05:28.882261 kubelet[3053]: E0913 00:05:28.882107 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:28.882261 kubelet[3053]: W0913 00:05:28.882130 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:28.882467 kubelet[3053]: E0913 00:05:28.882438 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:28.888674 kernel: audit: type=1325 audit(1757721928.873:294): table=nat:100 family=2 entries=19 op=nft_register_chain pid=3708 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:05:28.888850 kubelet[3053]: E0913 00:05:28.882824 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:28.888850 kubelet[3053]: W0913 00:05:28.882845 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:28.889156 kubelet[3053]: E0913 00:05:28.889118 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:28.873000 audit[3708]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffde602df0 a2=0 a3=1 items=0 ppid=3158 pid=3708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:28.890053 kubelet[3053]: E0913 00:05:28.889881 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:28.890053 kubelet[3053]: W0913 00:05:28.889908 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:28.890259 kubelet[3053]: E0913 00:05:28.890227 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:28.902178 kernel: audit: type=1300 audit(1757721928.873:294): arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffde602df0 a2=0 a3=1 items=0 ppid=3158 pid=3708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:28.902296 kubelet[3053]: E0913 00:05:28.890596 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:28.902296 kubelet[3053]: W0913 00:05:28.890618 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:28.902600 kubelet[3053]: E0913 00:05:28.902564 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:28.873000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:05:28.903113 kubelet[3053]: E0913 00:05:28.903062 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:28.903113 kubelet[3053]: W0913 00:05:28.903094 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:28.903321 kubelet[3053]: E0913 00:05:28.903293 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:28.908985 kernel: audit: type=1327 audit(1757721928.873:294): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:05:28.909104 kubelet[3053]: E0913 00:05:28.903483 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:28.909104 kubelet[3053]: W0913 00:05:28.903511 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:28.909333 kubelet[3053]: E0913 00:05:28.909281 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:28.909871 kubelet[3053]: E0913 00:05:28.909822 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:28.909871 kubelet[3053]: W0913 00:05:28.909857 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:28.910044 kubelet[3053]: E0913 00:05:28.909902 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:28.910330 kubelet[3053]: E0913 00:05:28.910302 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:28.910431 kubelet[3053]: W0913 00:05:28.910330 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:28.910557 kubelet[3053]: E0913 00:05:28.910530 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:28.910676 kubelet[3053]: E0913 00:05:28.910644 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:28.910676 kubelet[3053]: W0913 00:05:28.910670 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:28.911031 kubelet[3053]: E0913 00:05:28.910995 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:28.911031 kubelet[3053]: W0913 00:05:28.911023 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:28.911189 kubelet[3053]: E0913 00:05:28.911046 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:28.911398 kubelet[3053]: E0913 00:05:28.911362 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:28.911398 kubelet[3053]: W0913 00:05:28.911390 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:28.911522 kubelet[3053]: E0913 00:05:28.911412 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:28.911734 kubelet[3053]: E0913 00:05:28.911696 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:28.912257 kubelet[3053]: E0913 00:05:28.912225 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:28.912453 kubelet[3053]: W0913 00:05:28.912421 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:28.912588 kubelet[3053]: E0913 00:05:28.912564 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:28.913160 kubelet[3053]: E0913 00:05:28.913129 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:28.913371 kubelet[3053]: W0913 00:05:28.913342 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:28.913519 kubelet[3053]: E0913 00:05:28.913493 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:28.914568 kubelet[3053]: E0913 00:05:28.914529 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:28.914568 kubelet[3053]: W0913 00:05:28.914566 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:28.914758 kubelet[3053]: E0913 00:05:28.914597 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:05:28.915949 kubelet[3053]: E0913 00:05:28.915909 3053 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:05:28.915949 kubelet[3053]: W0913 00:05:28.915946 3053 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:05:28.916258 kubelet[3053]: E0913 00:05:28.915978 3053 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:05:29.004936 env[1920]: time="2025-09-13T00:05:29.004867846Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:29.007468 env[1920]: time="2025-09-13T00:05:29.007399741Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:29.009927 env[1920]: time="2025-09-13T00:05:29.009871276Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:29.013497 env[1920]: time="2025-09-13T00:05:29.013423665Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:29.014801 env[1920]: time="2025-09-13T00:05:29.014698859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image 
reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 13 00:05:29.020550 env[1920]: time="2025-09-13T00:05:29.020467098Z" level=info msg="CreateContainer within sandbox \"5b0d05ecce5441b63f30fa55c52c4d5e24635954183210cd2c5d3057621f51d3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:05:29.044505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount782672218.mount: Deactivated successfully. Sep 13 00:05:29.051089 env[1920]: time="2025-09-13T00:05:29.050966650Z" level=info msg="CreateContainer within sandbox \"5b0d05ecce5441b63f30fa55c52c4d5e24635954183210cd2c5d3057621f51d3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c903aaf520d0f4f7419b84aa2f6f3650575bbbf1e9ba030efe99d22fe0e4b347\"" Sep 13 00:05:29.054675 env[1920]: time="2025-09-13T00:05:29.054592371Z" level=info msg="StartContainer for \"c903aaf520d0f4f7419b84aa2f6f3650575bbbf1e9ba030efe99d22fe0e4b347\"" Sep 13 00:05:29.212854 env[1920]: time="2025-09-13T00:05:29.208486402Z" level=info msg="StartContainer for \"c903aaf520d0f4f7419b84aa2f6f3650575bbbf1e9ba030efe99d22fe0e4b347\" returns successfully" Sep 13 00:05:29.483752 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c903aaf520d0f4f7419b84aa2f6f3650575bbbf1e9ba030efe99d22fe0e4b347-rootfs.mount: Deactivated successfully. 
Sep 13 00:05:29.611437 kubelet[3053]: E0913 00:05:29.610740 3053 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cdkj7" podUID="b0810e85-a772-4d04-a431-1cf9fa4bc8f8" Sep 13 00:05:29.677782 env[1920]: time="2025-09-13T00:05:29.677676055Z" level=info msg="shim disconnected" id=c903aaf520d0f4f7419b84aa2f6f3650575bbbf1e9ba030efe99d22fe0e4b347 Sep 13 00:05:29.678051 env[1920]: time="2025-09-13T00:05:29.677759335Z" level=warning msg="cleaning up after shim disconnected" id=c903aaf520d0f4f7419b84aa2f6f3650575bbbf1e9ba030efe99d22fe0e4b347 namespace=k8s.io Sep 13 00:05:29.678051 env[1920]: time="2025-09-13T00:05:29.677810335Z" level=info msg="cleaning up dead shim" Sep 13 00:05:29.693089 env[1920]: time="2025-09-13T00:05:29.693024339Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:05:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3784 runtime=io.containerd.runc.v2\n" Sep 13 00:05:29.796444 env[1920]: time="2025-09-13T00:05:29.796360085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 00:05:31.611360 kubelet[3053]: E0913 00:05:31.611283 3053 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cdkj7" podUID="b0810e85-a772-4d04-a431-1cf9fa4bc8f8" Sep 13 00:05:33.610748 kubelet[3053]: E0913 00:05:33.610636 3053 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cdkj7" 
podUID="b0810e85-a772-4d04-a431-1cf9fa4bc8f8" Sep 13 00:05:33.899956 amazon-ssm-agent[1894]: 2025-09-13 00:05:33 INFO [HealthCheck] HealthCheck reporting agent health. Sep 13 00:05:34.451248 env[1920]: time="2025-09-13T00:05:34.451157335Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:34.455531 env[1920]: time="2025-09-13T00:05:34.455437655Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:34.460669 env[1920]: time="2025-09-13T00:05:34.460486936Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:34.465569 env[1920]: time="2025-09-13T00:05:34.464091451Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:34.465569 env[1920]: time="2025-09-13T00:05:34.465378116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 13 00:05:34.475215 env[1920]: time="2025-09-13T00:05:34.475133729Z" level=info msg="CreateContainer within sandbox \"5b0d05ecce5441b63f30fa55c52c4d5e24635954183210cd2c5d3057621f51d3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 00:05:34.505966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1829343839.mount: Deactivated successfully. 
Sep 13 00:05:34.509314 env[1920]: time="2025-09-13T00:05:34.509193614Z" level=info msg="CreateContainer within sandbox \"5b0d05ecce5441b63f30fa55c52c4d5e24635954183210cd2c5d3057621f51d3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fd8470802d28fe15b5277e53462434fea448cfe72ab5dc81ce464fe32691dc51\"" Sep 13 00:05:34.511652 env[1920]: time="2025-09-13T00:05:34.511584100Z" level=info msg="StartContainer for \"fd8470802d28fe15b5277e53462434fea448cfe72ab5dc81ce464fe32691dc51\"" Sep 13 00:05:34.581185 systemd[1]: run-containerd-runc-k8s.io-fd8470802d28fe15b5277e53462434fea448cfe72ab5dc81ce464fe32691dc51-runc.1VA8N2.mount: Deactivated successfully. Sep 13 00:05:34.674239 env[1920]: time="2025-09-13T00:05:34.674059537Z" level=info msg="StartContainer for \"fd8470802d28fe15b5277e53462434fea448cfe72ab5dc81ce464fe32691dc51\" returns successfully" Sep 13 00:05:35.610751 kubelet[3053]: E0913 00:05:35.610636 3053 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cdkj7" podUID="b0810e85-a772-4d04-a431-1cf9fa4bc8f8" Sep 13 00:05:35.984880 env[1920]: time="2025-09-13T00:05:35.984666045Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:05:35.999522 kubelet[3053]: I0913 00:05:35.999452 3053 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 13 00:05:36.041707 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd8470802d28fe15b5277e53462434fea448cfe72ab5dc81ce464fe32691dc51-rootfs.mount: Deactivated successfully. 
Sep 13 00:05:36.236510 kubelet[3053]: I0913 00:05:36.235953 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/527909b5-007b-4b06-af5d-92e50eb61126-calico-apiserver-certs\") pod \"calico-apiserver-8978b56b9-mstsf\" (UID: \"527909b5-007b-4b06-af5d-92e50eb61126\") " pod="calico-apiserver/calico-apiserver-8978b56b9-mstsf" Sep 13 00:05:36.236510 kubelet[3053]: I0913 00:05:36.236094 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d0a25691-16ce-4727-a17e-3adde345074c-calico-apiserver-certs\") pod \"calico-apiserver-8978b56b9-tpkzf\" (UID: \"d0a25691-16ce-4727-a17e-3adde345074c\") " pod="calico-apiserver/calico-apiserver-8978b56b9-tpkzf" Sep 13 00:05:36.236510 kubelet[3053]: I0913 00:05:36.236146 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42009314-cb57-4945-ba05-c28efb80272b-config-volume\") pod \"coredns-7c65d6cfc9-fkdwg\" (UID: \"42009314-cb57-4945-ba05-c28efb80272b\") " pod="kube-system/coredns-7c65d6cfc9-fkdwg" Sep 13 00:05:36.236510 kubelet[3053]: I0913 00:05:36.236237 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfxlf\" (UniqueName: \"kubernetes.io/projected/527909b5-007b-4b06-af5d-92e50eb61126-kube-api-access-tfxlf\") pod \"calico-apiserver-8978b56b9-mstsf\" (UID: \"527909b5-007b-4b06-af5d-92e50eb61126\") " pod="calico-apiserver/calico-apiserver-8978b56b9-mstsf" Sep 13 00:05:36.236510 kubelet[3053]: I0913 00:05:36.236312 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce9497b-6716-4542-9084-aaea79149a75-goldmane-ca-bundle\") pod 
\"goldmane-7988f88666-6v2qn\" (UID: \"dce9497b-6716-4542-9084-aaea79149a75\") " pod="calico-system/goldmane-7988f88666-6v2qn" Sep 13 00:05:36.237118 kubelet[3053]: I0913 00:05:36.236391 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/dce9497b-6716-4542-9084-aaea79149a75-goldmane-key-pair\") pod \"goldmane-7988f88666-6v2qn\" (UID: \"dce9497b-6716-4542-9084-aaea79149a75\") " pod="calico-system/goldmane-7988f88666-6v2qn" Sep 13 00:05:36.237118 kubelet[3053]: I0913 00:05:36.236436 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5pmh\" (UniqueName: \"kubernetes.io/projected/dce9497b-6716-4542-9084-aaea79149a75-kube-api-access-j5pmh\") pod \"goldmane-7988f88666-6v2qn\" (UID: \"dce9497b-6716-4542-9084-aaea79149a75\") " pod="calico-system/goldmane-7988f88666-6v2qn" Sep 13 00:05:36.237118 kubelet[3053]: I0913 00:05:36.236587 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce9497b-6716-4542-9084-aaea79149a75-config\") pod \"goldmane-7988f88666-6v2qn\" (UID: \"dce9497b-6716-4542-9084-aaea79149a75\") " pod="calico-system/goldmane-7988f88666-6v2qn" Sep 13 00:05:36.237118 kubelet[3053]: I0913 00:05:36.236658 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfngt\" (UniqueName: \"kubernetes.io/projected/42009314-cb57-4945-ba05-c28efb80272b-kube-api-access-wfngt\") pod \"coredns-7c65d6cfc9-fkdwg\" (UID: \"42009314-cb57-4945-ba05-c28efb80272b\") " pod="kube-system/coredns-7c65d6cfc9-fkdwg" Sep 13 00:05:36.237118 kubelet[3053]: I0913 00:05:36.236705 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqzrx\" (UniqueName: 
\"kubernetes.io/projected/d0a25691-16ce-4727-a17e-3adde345074c-kube-api-access-xqzrx\") pod \"calico-apiserver-8978b56b9-tpkzf\" (UID: \"d0a25691-16ce-4727-a17e-3adde345074c\") " pod="calico-apiserver/calico-apiserver-8978b56b9-tpkzf" Sep 13 00:05:36.237509 kubelet[3053]: I0913 00:05:36.236813 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5015b2dd-c6f7-46e4-9af1-65597994d2b9-tigera-ca-bundle\") pod \"calico-kube-controllers-6b97f6f77f-zv7t8\" (UID: \"5015b2dd-c6f7-46e4-9af1-65597994d2b9\") " pod="calico-system/calico-kube-controllers-6b97f6f77f-zv7t8" Sep 13 00:05:36.237509 kubelet[3053]: I0913 00:05:36.236886 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7x6c\" (UniqueName: \"kubernetes.io/projected/5015b2dd-c6f7-46e4-9af1-65597994d2b9-kube-api-access-m7x6c\") pod \"calico-kube-controllers-6b97f6f77f-zv7t8\" (UID: \"5015b2dd-c6f7-46e4-9af1-65597994d2b9\") " pod="calico-system/calico-kube-controllers-6b97f6f77f-zv7t8" Sep 13 00:05:36.237509 kubelet[3053]: I0913 00:05:36.236934 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8xgs\" (UniqueName: \"kubernetes.io/projected/e3f25ad3-33b2-4d46-a48c-0b632c9b8329-kube-api-access-d8xgs\") pod \"coredns-7c65d6cfc9-frxhj\" (UID: \"e3f25ad3-33b2-4d46-a48c-0b632c9b8329\") " pod="kube-system/coredns-7c65d6cfc9-frxhj" Sep 13 00:05:36.237509 kubelet[3053]: I0913 00:05:36.237026 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3f25ad3-33b2-4d46-a48c-0b632c9b8329-config-volume\") pod \"coredns-7c65d6cfc9-frxhj\" (UID: \"e3f25ad3-33b2-4d46-a48c-0b632c9b8329\") " pod="kube-system/coredns-7c65d6cfc9-frxhj" Sep 13 00:05:36.406908 env[1920]: 
time="2025-09-13T00:05:36.406815235Z" level=info msg="shim disconnected" id=fd8470802d28fe15b5277e53462434fea448cfe72ab5dc81ce464fe32691dc51 Sep 13 00:05:36.406908 env[1920]: time="2025-09-13T00:05:36.406905763Z" level=warning msg="cleaning up after shim disconnected" id=fd8470802d28fe15b5277e53462434fea448cfe72ab5dc81ce464fe32691dc51 namespace=k8s.io Sep 13 00:05:36.407327 env[1920]: time="2025-09-13T00:05:36.406932163Z" level=info msg="cleaning up dead shim" Sep 13 00:05:36.495933 kubelet[3053]: I0913 00:05:36.493927 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mmtf\" (UniqueName: \"kubernetes.io/projected/fea8edb8-d901-4812-96e3-2689d15918fb-kube-api-access-7mmtf\") pod \"whisker-79ccfb85c9-62fmz\" (UID: \"fea8edb8-d901-4812-96e3-2689d15918fb\") " pod="calico-system/whisker-79ccfb85c9-62fmz" Sep 13 00:05:36.495933 kubelet[3053]: I0913 00:05:36.494065 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fea8edb8-d901-4812-96e3-2689d15918fb-whisker-backend-key-pair\") pod \"whisker-79ccfb85c9-62fmz\" (UID: \"fea8edb8-d901-4812-96e3-2689d15918fb\") " pod="calico-system/whisker-79ccfb85c9-62fmz" Sep 13 00:05:36.495933 kubelet[3053]: I0913 00:05:36.494170 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fea8edb8-d901-4812-96e3-2689d15918fb-whisker-ca-bundle\") pod \"whisker-79ccfb85c9-62fmz\" (UID: \"fea8edb8-d901-4812-96e3-2689d15918fb\") " pod="calico-system/whisker-79ccfb85c9-62fmz" Sep 13 00:05:36.505353 env[1920]: time="2025-09-13T00:05:36.498979595Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:05:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3865 runtime=io.containerd.runc.v2\n" Sep 13 00:05:36.564821 env[1920]: 
time="2025-09-13T00:05:36.564198593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fkdwg,Uid:42009314-cb57-4945-ba05-c28efb80272b,Namespace:kube-system,Attempt:0,}" Sep 13 00:05:36.581094 env[1920]: time="2025-09-13T00:05:36.580996315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-frxhj,Uid:e3f25ad3-33b2-4d46-a48c-0b632c9b8329,Namespace:kube-system,Attempt:0,}" Sep 13 00:05:36.614943 env[1920]: time="2025-09-13T00:05:36.613682026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-6v2qn,Uid:dce9497b-6716-4542-9084-aaea79149a75,Namespace:calico-system,Attempt:0,}" Sep 13 00:05:36.628278 env[1920]: time="2025-09-13T00:05:36.628173526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8978b56b9-mstsf,Uid:527909b5-007b-4b06-af5d-92e50eb61126,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:05:36.641487 env[1920]: time="2025-09-13T00:05:36.640165256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8978b56b9-tpkzf,Uid:d0a25691-16ce-4727-a17e-3adde345074c,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:05:36.754524 env[1920]: time="2025-09-13T00:05:36.753723162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b97f6f77f-zv7t8,Uid:5015b2dd-c6f7-46e4-9af1-65597994d2b9,Namespace:calico-system,Attempt:0,}" Sep 13 00:05:36.830851 env[1920]: time="2025-09-13T00:05:36.828152660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:05:36.927323 env[1920]: time="2025-09-13T00:05:36.927249174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79ccfb85c9-62fmz,Uid:fea8edb8-d901-4812-96e3-2689d15918fb,Namespace:calico-system,Attempt:0,}" Sep 13 00:05:37.032457 env[1920]: time="2025-09-13T00:05:37.032337447Z" level=error msg="Failed to destroy network for sandbox \"9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:37.033282 env[1920]: time="2025-09-13T00:05:37.033159592Z" level=error msg="encountered an error cleaning up failed sandbox \"9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:37.033382 env[1920]: time="2025-09-13T00:05:37.033276220Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-frxhj,Uid:e3f25ad3-33b2-4d46-a48c-0b632c9b8329,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:37.034179 kubelet[3053]: E0913 00:05:37.033738 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:37.034179 kubelet[3053]: E0913 00:05:37.033896 3053 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-frxhj" Sep 13 00:05:37.034179 kubelet[3053]: E0913 00:05:37.033937 3053 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-frxhj" Sep 13 00:05:37.035108 kubelet[3053]: E0913 00:05:37.034078 3053 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-frxhj_kube-system(e3f25ad3-33b2-4d46-a48c-0b632c9b8329)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-frxhj_kube-system(e3f25ad3-33b2-4d46-a48c-0b632c9b8329)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-frxhj" podUID="e3f25ad3-33b2-4d46-a48c-0b632c9b8329" Sep 13 00:05:37.139556 env[1920]: time="2025-09-13T00:05:37.139474335Z" level=error msg="Failed to destroy network for sandbox \"dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:37.144822 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b-shm.mount: Deactivated successfully. 
Sep 13 00:05:37.147286 env[1920]: time="2025-09-13T00:05:37.146471588Z" level=error msg="encountered an error cleaning up failed sandbox \"dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:37.147286 env[1920]: time="2025-09-13T00:05:37.146577956Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-6v2qn,Uid:dce9497b-6716-4542-9084-aaea79149a75,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:37.150585 kubelet[3053]: E0913 00:05:37.149647 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:37.150585 kubelet[3053]: E0913 00:05:37.149816 3053 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-6v2qn" Sep 13 00:05:37.150585 kubelet[3053]: E0913 00:05:37.149854 3053 
kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-6v2qn" Sep 13 00:05:37.151006 kubelet[3053]: E0913 00:05:37.150381 3053 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-6v2qn_calico-system(dce9497b-6716-4542-9084-aaea79149a75)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-6v2qn_calico-system(dce9497b-6716-4542-9084-aaea79149a75)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-6v2qn" podUID="dce9497b-6716-4542-9084-aaea79149a75" Sep 13 00:05:37.186895 env[1920]: time="2025-09-13T00:05:37.186815535Z" level=error msg="Failed to destroy network for sandbox \"dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:37.192466 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec-shm.mount: Deactivated successfully. 
Sep 13 00:05:37.198745 env[1920]: time="2025-09-13T00:05:37.198536868Z" level=error msg="encountered an error cleaning up failed sandbox \"dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:37.199289 env[1920]: time="2025-09-13T00:05:37.199113289Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fkdwg,Uid:42009314-cb57-4945-ba05-c28efb80272b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:37.202007 kubelet[3053]: E0913 00:05:37.201922 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:37.202202 kubelet[3053]: E0913 00:05:37.202020 3053 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-fkdwg" Sep 13 00:05:37.202202 kubelet[3053]: E0913 00:05:37.202059 3053 kuberuntime_manager.go:1170] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-fkdwg" Sep 13 00:05:37.202202 kubelet[3053]: E0913 00:05:37.202131 3053 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-fkdwg_kube-system(42009314-cb57-4945-ba05-c28efb80272b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-fkdwg_kube-system(42009314-cb57-4945-ba05-c28efb80272b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-fkdwg" podUID="42009314-cb57-4945-ba05-c28efb80272b" Sep 13 00:05:37.233076 env[1920]: time="2025-09-13T00:05:37.232993851Z" level=error msg="Failed to destroy network for sandbox \"7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:37.238320 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec-shm.mount: Deactivated successfully. 
Sep 13 00:05:37.243036 env[1920]: time="2025-09-13T00:05:37.242948771Z" level=error msg="encountered an error cleaning up failed sandbox \"7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:37.243486 env[1920]: time="2025-09-13T00:05:37.243412775Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8978b56b9-mstsf,Uid:527909b5-007b-4b06-af5d-92e50eb61126,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:37.244243 kubelet[3053]: E0913 00:05:37.244042 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:37.244243 kubelet[3053]: E0913 00:05:37.244173 3053 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8978b56b9-mstsf" Sep 13 00:05:37.244243 kubelet[3053]: E0913 00:05:37.244212 3053 
kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8978b56b9-mstsf" Sep 13 00:05:37.244557 kubelet[3053]: E0913 00:05:37.244274 3053 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8978b56b9-mstsf_calico-apiserver(527909b5-007b-4b06-af5d-92e50eb61126)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8978b56b9-mstsf_calico-apiserver(527909b5-007b-4b06-af5d-92e50eb61126)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8978b56b9-mstsf" podUID="527909b5-007b-4b06-af5d-92e50eb61126" Sep 13 00:05:37.267886 env[1920]: time="2025-09-13T00:05:37.267727854Z" level=error msg="Failed to destroy network for sandbox \"b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:37.277045 env[1920]: time="2025-09-13T00:05:37.273574115Z" level=error msg="encountered an error cleaning up failed sandbox \"b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:37.277045 env[1920]: time="2025-09-13T00:05:37.273685571Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b97f6f77f-zv7t8,Uid:5015b2dd-c6f7-46e4-9af1-65597994d2b9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:37.277368 kubelet[3053]: E0913 00:05:37.274038 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:37.277368 kubelet[3053]: E0913 00:05:37.274125 3053 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b97f6f77f-zv7t8" Sep 13 00:05:37.277368 kubelet[3053]: E0913 00:05:37.274176 3053 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b97f6f77f-zv7t8" Sep 13 00:05:37.273151 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00-shm.mount: Deactivated successfully. Sep 13 00:05:37.278185 kubelet[3053]: E0913 00:05:37.274243 3053 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b97f6f77f-zv7t8_calico-system(5015b2dd-c6f7-46e4-9af1-65597994d2b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b97f6f77f-zv7t8_calico-system(5015b2dd-c6f7-46e4-9af1-65597994d2b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b97f6f77f-zv7t8" podUID="5015b2dd-c6f7-46e4-9af1-65597994d2b9" Sep 13 00:05:37.285320 env[1920]: time="2025-09-13T00:05:37.285131660Z" level=error msg="Failed to destroy network for sandbox \"0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:37.286454 env[1920]: time="2025-09-13T00:05:37.286380513Z" level=error msg="encountered an error cleaning up failed sandbox \"0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:37.286736 
env[1920]: time="2025-09-13T00:05:37.286656369Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8978b56b9-tpkzf,Uid:d0a25691-16ce-4727-a17e-3adde345074c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:37.288277 kubelet[3053]: E0913 00:05:37.287293 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:37.288277 kubelet[3053]: E0913 00:05:37.287431 3053 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8978b56b9-tpkzf" Sep 13 00:05:37.288277 kubelet[3053]: E0913 00:05:37.287470 3053 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8978b56b9-tpkzf" Sep 13 00:05:37.288682 
kubelet[3053]: E0913 00:05:37.287694 3053 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8978b56b9-tpkzf_calico-apiserver(d0a25691-16ce-4727-a17e-3adde345074c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8978b56b9-tpkzf_calico-apiserver(d0a25691-16ce-4727-a17e-3adde345074c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8978b56b9-tpkzf" podUID="d0a25691-16ce-4727-a17e-3adde345074c" Sep 13 00:05:37.312608 env[1920]: time="2025-09-13T00:05:37.312534449Z" level=error msg="Failed to destroy network for sandbox \"30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:37.313609 env[1920]: time="2025-09-13T00:05:37.313531134Z" level=error msg="encountered an error cleaning up failed sandbox \"30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:37.313920 env[1920]: time="2025-09-13T00:05:37.313864206Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79ccfb85c9-62fmz,Uid:fea8edb8-d901-4812-96e3-2689d15918fb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:37.314746 kubelet[3053]: E0913 00:05:37.314433 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:37.314746 kubelet[3053]: E0913 00:05:37.314559 3053 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79ccfb85c9-62fmz" Sep 13 00:05:37.314746 kubelet[3053]: E0913 00:05:37.314596 3053 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79ccfb85c9-62fmz" Sep 13 00:05:37.315115 kubelet[3053]: E0913 00:05:37.314738 3053 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-79ccfb85c9-62fmz_calico-system(fea8edb8-d901-4812-96e3-2689d15918fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-79ccfb85c9-62fmz_calico-system(fea8edb8-d901-4812-96e3-2689d15918fb)\\\": rpc 
error: code = Unknown desc = failed to setup network for sandbox \\\"30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-79ccfb85c9-62fmz" podUID="fea8edb8-d901-4812-96e3-2689d15918fb" Sep 13 00:05:37.617251 env[1920]: time="2025-09-13T00:05:37.616892593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cdkj7,Uid:b0810e85-a772-4d04-a431-1cf9fa4bc8f8,Namespace:calico-system,Attempt:0,}" Sep 13 00:05:37.731183 env[1920]: time="2025-09-13T00:05:37.731107446Z" level=error msg="Failed to destroy network for sandbox \"8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:37.732091 env[1920]: time="2025-09-13T00:05:37.732007939Z" level=error msg="encountered an error cleaning up failed sandbox \"8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:37.732366 env[1920]: time="2025-09-13T00:05:37.732281431Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cdkj7,Uid:b0810e85-a772-4d04-a431-1cf9fa4bc8f8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Sep 13 00:05:37.733444 kubelet[3053]: E0913 00:05:37.732847 3053 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:37.733444 kubelet[3053]: E0913 00:05:37.732941 3053 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cdkj7" Sep 13 00:05:37.733444 kubelet[3053]: E0913 00:05:37.732997 3053 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cdkj7" Sep 13 00:05:37.733990 kubelet[3053]: E0913 00:05:37.733057 3053 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cdkj7_calico-system(b0810e85-a772-4d04-a431-1cf9fa4bc8f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cdkj7_calico-system(b0810e85-a772-4d04-a431-1cf9fa4bc8f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cdkj7" podUID="b0810e85-a772-4d04-a431-1cf9fa4bc8f8" Sep 13 00:05:37.822226 kubelet[3053]: I0913 00:05:37.822163 3053 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" Sep 13 00:05:37.826283 kubelet[3053]: I0913 00:05:37.826237 3053 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" Sep 13 00:05:37.826917 env[1920]: time="2025-09-13T00:05:37.825945704Z" level=info msg="StopPodSandbox for \"9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323\"" Sep 13 00:05:37.829323 env[1920]: time="2025-09-13T00:05:37.829209226Z" level=info msg="StopPodSandbox for \"7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec\"" Sep 13 00:05:37.835467 kubelet[3053]: I0913 00:05:37.833477 3053 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" Sep 13 00:05:37.837663 env[1920]: time="2025-09-13T00:05:37.837609581Z" level=info msg="StopPodSandbox for \"0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0\"" Sep 13 00:05:37.839371 kubelet[3053]: I0913 00:05:37.838744 3053 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" Sep 13 00:05:37.842901 env[1920]: time="2025-09-13T00:05:37.842823489Z" level=info msg="StopPodSandbox for \"dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b\"" Sep 13 00:05:37.848304 kubelet[3053]: I0913 00:05:37.846417 3053 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" Sep 13 00:05:37.849493 env[1920]: time="2025-09-13T00:05:37.849403430Z" level=info msg="StopPodSandbox for \"8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85\"" Sep 13 00:05:37.853419 kubelet[3053]: I0913 00:05:37.853353 3053 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" Sep 13 00:05:37.856612 env[1920]: time="2025-09-13T00:05:37.856497463Z" level=info msg="StopPodSandbox for \"dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec\"" Sep 13 00:05:37.860817 kubelet[3053]: I0913 00:05:37.859417 3053 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" Sep 13 00:05:37.861800 env[1920]: time="2025-09-13T00:05:37.861679151Z" level=info msg="StopPodSandbox for \"30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4\"" Sep 13 00:05:37.866834 kubelet[3053]: I0913 00:05:37.866658 3053 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" Sep 13 00:05:37.870978 env[1920]: time="2025-09-13T00:05:37.868863413Z" level=info msg="StopPodSandbox for \"b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00\"" Sep 13 00:05:38.042742 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4-shm.mount: Deactivated successfully. Sep 13 00:05:38.043118 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0-shm.mount: Deactivated successfully. 
Sep 13 00:05:38.100989 env[1920]: time="2025-09-13T00:05:38.100879328Z" level=error msg="StopPodSandbox for \"7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec\" failed" error="failed to destroy network for sandbox \"7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:38.103033 kubelet[3053]: E0913 00:05:38.102481 3053 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" Sep 13 00:05:38.103033 kubelet[3053]: E0913 00:05:38.102723 3053 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec"} Sep 13 00:05:38.103033 kubelet[3053]: E0913 00:05:38.102897 3053 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"527909b5-007b-4b06-af5d-92e50eb61126\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:05:38.103033 kubelet[3053]: E0913 00:05:38.102943 3053 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"527909b5-007b-4b06-af5d-92e50eb61126\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8978b56b9-mstsf" podUID="527909b5-007b-4b06-af5d-92e50eb61126" Sep 13 00:05:38.107553 env[1920]: time="2025-09-13T00:05:38.107317201Z" level=error msg="StopPodSandbox for \"dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b\" failed" error="failed to destroy network for sandbox \"dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:38.108638 kubelet[3053]: E0913 00:05:38.108172 3053 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" Sep 13 00:05:38.108638 kubelet[3053]: E0913 00:05:38.108278 3053 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b"} Sep 13 00:05:38.108638 kubelet[3053]: E0913 00:05:38.108446 3053 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dce9497b-6716-4542-9084-aaea79149a75\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:05:38.108638 kubelet[3053]: E0913 00:05:38.108525 3053 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dce9497b-6716-4542-9084-aaea79149a75\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-6v2qn" podUID="dce9497b-6716-4542-9084-aaea79149a75" Sep 13 00:05:38.139402 env[1920]: time="2025-09-13T00:05:38.139197420Z" level=error msg="StopPodSandbox for \"9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323\" failed" error="failed to destroy network for sandbox \"9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:38.140540 kubelet[3053]: E0913 00:05:38.140022 3053 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" Sep 13 00:05:38.140540 kubelet[3053]: E0913 00:05:38.140165 3053 
kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323"} Sep 13 00:05:38.140540 kubelet[3053]: E0913 00:05:38.140391 3053 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e3f25ad3-33b2-4d46-a48c-0b632c9b8329\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:05:38.140540 kubelet[3053]: E0913 00:05:38.140467 3053 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e3f25ad3-33b2-4d46-a48c-0b632c9b8329\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-frxhj" podUID="e3f25ad3-33b2-4d46-a48c-0b632c9b8329" Sep 13 00:05:38.166904 env[1920]: time="2025-09-13T00:05:38.166805540Z" level=error msg="StopPodSandbox for \"0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0\" failed" error="failed to destroy network for sandbox \"0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:38.167928 kubelet[3053]: E0913 00:05:38.167434 3053 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown 
desc = failed to destroy network for sandbox \"0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" Sep 13 00:05:38.167928 kubelet[3053]: E0913 00:05:38.167557 3053 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0"} Sep 13 00:05:38.167928 kubelet[3053]: E0913 00:05:38.167738 3053 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d0a25691-16ce-4727-a17e-3adde345074c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:05:38.167928 kubelet[3053]: E0913 00:05:38.167840 3053 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d0a25691-16ce-4727-a17e-3adde345074c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8978b56b9-tpkzf" podUID="d0a25691-16ce-4727-a17e-3adde345074c" Sep 13 00:05:38.184642 env[1920]: time="2025-09-13T00:05:38.184563213Z" level=error msg="StopPodSandbox for \"8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85\" 
failed" error="failed to destroy network for sandbox \"8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:38.185637 kubelet[3053]: E0913 00:05:38.185243 3053 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" Sep 13 00:05:38.185637 kubelet[3053]: E0913 00:05:38.185346 3053 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85"} Sep 13 00:05:38.185637 kubelet[3053]: E0913 00:05:38.185442 3053 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b0810e85-a772-4d04-a431-1cf9fa4bc8f8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:05:38.185637 kubelet[3053]: E0913 00:05:38.185518 3053 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b0810e85-a772-4d04-a431-1cf9fa4bc8f8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cdkj7" podUID="b0810e85-a772-4d04-a431-1cf9fa4bc8f8" Sep 13 00:05:38.204082 env[1920]: time="2025-09-13T00:05:38.203975327Z" level=error msg="StopPodSandbox for \"30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4\" failed" error="failed to destroy network for sandbox \"30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:38.205222 kubelet[3053]: E0913 00:05:38.204697 3053 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" Sep 13 00:05:38.205222 kubelet[3053]: E0913 00:05:38.204803 3053 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4"} Sep 13 00:05:38.205222 kubelet[3053]: E0913 00:05:38.204891 3053 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fea8edb8-d901-4812-96e3-2689d15918fb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:05:38.205222 kubelet[3053]: E0913 00:05:38.204943 3053 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fea8edb8-d901-4812-96e3-2689d15918fb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-79ccfb85c9-62fmz" podUID="fea8edb8-d901-4812-96e3-2689d15918fb" Sep 13 00:05:38.206376 env[1920]: time="2025-09-13T00:05:38.206285041Z" level=error msg="StopPodSandbox for \"b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00\" failed" error="failed to destroy network for sandbox \"b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:38.207351 kubelet[3053]: E0913 00:05:38.206982 3053 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" Sep 13 00:05:38.207351 kubelet[3053]: E0913 00:05:38.207130 3053 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00"} Sep 13 00:05:38.207351 kubelet[3053]: E0913 
00:05:38.207216 3053 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5015b2dd-c6f7-46e4-9af1-65597994d2b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:05:38.207351 kubelet[3053]: E0913 00:05:38.207268 3053 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5015b2dd-c6f7-46e4-9af1-65597994d2b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b97f6f77f-zv7t8" podUID="5015b2dd-c6f7-46e4-9af1-65597994d2b9" Sep 13 00:05:38.210592 env[1920]: time="2025-09-13T00:05:38.210404884Z" level=error msg="StopPodSandbox for \"dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec\" failed" error="failed to destroy network for sandbox \"dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:05:38.211723 kubelet[3053]: E0913 00:05:38.211170 3053 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" Sep 13 00:05:38.211723 kubelet[3053]: E0913 00:05:38.211270 3053 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec"} Sep 13 00:05:38.211723 kubelet[3053]: E0913 00:05:38.211557 3053 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"42009314-cb57-4945-ba05-c28efb80272b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:05:38.211723 kubelet[3053]: E0913 00:05:38.211629 3053 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"42009314-cb57-4945-ba05-c28efb80272b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-fkdwg" podUID="42009314-cb57-4945-ba05-c28efb80272b" Sep 13 00:05:46.473797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1864734779.mount: Deactivated successfully. 
Sep 13 00:05:46.542796 env[1920]: time="2025-09-13T00:05:46.542684309Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:46.547856 env[1920]: time="2025-09-13T00:05:46.547753083Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:46.552232 env[1920]: time="2025-09-13T00:05:46.552149233Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:46.556531 env[1920]: time="2025-09-13T00:05:46.556449106Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:46.557977 env[1920]: time="2025-09-13T00:05:46.557918030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 13 00:05:46.597048 env[1920]: time="2025-09-13T00:05:46.596878716Z" level=info msg="CreateContainer within sandbox \"5b0d05ecce5441b63f30fa55c52c4d5e24635954183210cd2c5d3057621f51d3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:05:46.638263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount164222238.mount: Deactivated successfully. 
Sep 13 00:05:46.647950 env[1920]: time="2025-09-13T00:05:46.647840182Z" level=info msg="CreateContainer within sandbox \"5b0d05ecce5441b63f30fa55c52c4d5e24635954183210cd2c5d3057621f51d3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"dad4fff024c4f5fee0310844c06f44418179385c69ed85739d4c8dcc2f307956\"" Sep 13 00:05:46.649477 env[1920]: time="2025-09-13T00:05:46.649386131Z" level=info msg="StartContainer for \"dad4fff024c4f5fee0310844c06f44418179385c69ed85739d4c8dcc2f307956\"" Sep 13 00:05:46.847641 env[1920]: time="2025-09-13T00:05:46.847575545Z" level=info msg="StartContainer for \"dad4fff024c4f5fee0310844c06f44418179385c69ed85739d4c8dcc2f307956\" returns successfully" Sep 13 00:05:46.944138 kubelet[3053]: I0913 00:05:46.940012 3053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pwhpc" podStartSLOduration=1.8464335360000002 podStartE2EDuration="23.939987204s" podCreationTimestamp="2025-09-13 00:05:23 +0000 UTC" firstStartedPulling="2025-09-13 00:05:24.466652988 +0000 UTC m=+30.183953157" lastFinishedPulling="2025-09-13 00:05:46.560206668 +0000 UTC m=+52.277506825" observedRunningTime="2025-09-13 00:05:46.938476671 +0000 UTC m=+52.655776852" watchObservedRunningTime="2025-09-13 00:05:46.939987204 +0000 UTC m=+52.657287373" Sep 13 00:05:47.273435 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:05:47.273628 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 13 00:05:47.561796 env[1920]: time="2025-09-13T00:05:47.561690223Z" level=info msg="StopPodSandbox for \"30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4\"" Sep 13 00:05:47.880837 env[1920]: 2025-09-13 00:05:47.728 [INFO][4304] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" Sep 13 00:05:47.880837 env[1920]: 2025-09-13 00:05:47.729 [INFO][4304] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" iface="eth0" netns="/var/run/netns/cni-9ff1230f-6bcd-b677-928f-e9c7986578c0" Sep 13 00:05:47.880837 env[1920]: 2025-09-13 00:05:47.729 [INFO][4304] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" iface="eth0" netns="/var/run/netns/cni-9ff1230f-6bcd-b677-928f-e9c7986578c0" Sep 13 00:05:47.880837 env[1920]: 2025-09-13 00:05:47.730 [INFO][4304] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" iface="eth0" netns="/var/run/netns/cni-9ff1230f-6bcd-b677-928f-e9c7986578c0" Sep 13 00:05:47.880837 env[1920]: 2025-09-13 00:05:47.730 [INFO][4304] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" Sep 13 00:05:47.880837 env[1920]: 2025-09-13 00:05:47.730 [INFO][4304] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" Sep 13 00:05:47.880837 env[1920]: 2025-09-13 00:05:47.843 [INFO][4311] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" HandleID="k8s-pod-network.30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" Workload="ip--172--31--29--1-k8s-whisker--79ccfb85c9--62fmz-eth0" Sep 13 00:05:47.880837 env[1920]: 2025-09-13 00:05:47.843 [INFO][4311] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:47.880837 env[1920]: 2025-09-13 00:05:47.844 [INFO][4311] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:47.880837 env[1920]: 2025-09-13 00:05:47.861 [WARNING][4311] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" HandleID="k8s-pod-network.30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" Workload="ip--172--31--29--1-k8s-whisker--79ccfb85c9--62fmz-eth0" Sep 13 00:05:47.880837 env[1920]: 2025-09-13 00:05:47.861 [INFO][4311] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" HandleID="k8s-pod-network.30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" Workload="ip--172--31--29--1-k8s-whisker--79ccfb85c9--62fmz-eth0" Sep 13 00:05:47.880837 env[1920]: 2025-09-13 00:05:47.864 [INFO][4311] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:47.880837 env[1920]: 2025-09-13 00:05:47.875 [INFO][4304] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" Sep 13 00:05:47.886594 env[1920]: time="2025-09-13T00:05:47.886283394Z" level=info msg="TearDown network for sandbox \"30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4\" successfully" Sep 13 00:05:47.886594 env[1920]: time="2025-09-13T00:05:47.886372457Z" level=info msg="StopPodSandbox for \"30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4\" returns successfully" Sep 13 00:05:47.891915 systemd[1]: run-netns-cni\x2d9ff1230f\x2d6bcd\x2db677\x2d928f\x2de9c7986578c0.mount: Deactivated successfully. Sep 13 00:05:47.953243 systemd[1]: run-containerd-runc-k8s.io-dad4fff024c4f5fee0310844c06f44418179385c69ed85739d4c8dcc2f307956-runc.QXnHyJ.mount: Deactivated successfully. 
Sep 13 00:05:48.001820 kubelet[3053]: I0913 00:05:48.001494 3053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mmtf\" (UniqueName: \"kubernetes.io/projected/fea8edb8-d901-4812-96e3-2689d15918fb-kube-api-access-7mmtf\") pod \"fea8edb8-d901-4812-96e3-2689d15918fb\" (UID: \"fea8edb8-d901-4812-96e3-2689d15918fb\") " Sep 13 00:05:48.001820 kubelet[3053]: I0913 00:05:48.001570 3053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fea8edb8-d901-4812-96e3-2689d15918fb-whisker-ca-bundle\") pod \"fea8edb8-d901-4812-96e3-2689d15918fb\" (UID: \"fea8edb8-d901-4812-96e3-2689d15918fb\") " Sep 13 00:05:48.001820 kubelet[3053]: I0913 00:05:48.001622 3053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fea8edb8-d901-4812-96e3-2689d15918fb-whisker-backend-key-pair\") pod \"fea8edb8-d901-4812-96e3-2689d15918fb\" (UID: \"fea8edb8-d901-4812-96e3-2689d15918fb\") " Sep 13 00:05:48.014555 kubelet[3053]: I0913 00:05:48.014499 3053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fea8edb8-d901-4812-96e3-2689d15918fb-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "fea8edb8-d901-4812-96e3-2689d15918fb" (UID: "fea8edb8-d901-4812-96e3-2689d15918fb"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:05:48.018626 systemd[1]: var-lib-kubelet-pods-fea8edb8\x2dd901\x2d4812\x2d96e3\x2d2689d15918fb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7mmtf.mount: Deactivated successfully. 
Sep 13 00:05:48.028832 kubelet[3053]: I0913 00:05:48.028736 3053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fea8edb8-d901-4812-96e3-2689d15918fb-kube-api-access-7mmtf" (OuterVolumeSpecName: "kube-api-access-7mmtf") pod "fea8edb8-d901-4812-96e3-2689d15918fb" (UID: "fea8edb8-d901-4812-96e3-2689d15918fb"). InnerVolumeSpecName "kube-api-access-7mmtf". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:05:48.030564 kubelet[3053]: I0913 00:05:48.030400 3053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fea8edb8-d901-4812-96e3-2689d15918fb-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "fea8edb8-d901-4812-96e3-2689d15918fb" (UID: "fea8edb8-d901-4812-96e3-2689d15918fb"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:05:48.102588 kubelet[3053]: I0913 00:05:48.102543 3053 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fea8edb8-d901-4812-96e3-2689d15918fb-whisker-backend-key-pair\") on node \"ip-172-31-29-1\" DevicePath \"\"" Sep 13 00:05:48.102869 kubelet[3053]: I0913 00:05:48.102843 3053 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mmtf\" (UniqueName: \"kubernetes.io/projected/fea8edb8-d901-4812-96e3-2689d15918fb-kube-api-access-7mmtf\") on node \"ip-172-31-29-1\" DevicePath \"\"" Sep 13 00:05:48.103037 kubelet[3053]: I0913 00:05:48.103014 3053 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fea8edb8-d901-4812-96e3-2689d15918fb-whisker-ca-bundle\") on node \"ip-172-31-29-1\" DevicePath \"\"" Sep 13 00:05:48.470228 systemd[1]: var-lib-kubelet-pods-fea8edb8\x2dd901\x2d4812\x2d96e3\x2d2689d15918fb-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 13 00:05:48.613464 env[1920]: time="2025-09-13T00:05:48.613391855Z" level=info msg="StopPodSandbox for \"9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323\"" Sep 13 00:05:48.779852 env[1920]: 2025-09-13 00:05:48.706 [INFO][4360] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" Sep 13 00:05:48.779852 env[1920]: 2025-09-13 00:05:48.707 [INFO][4360] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" iface="eth0" netns="/var/run/netns/cni-f2c37949-5885-bdd5-67fe-d7d71e52196c" Sep 13 00:05:48.779852 env[1920]: 2025-09-13 00:05:48.708 [INFO][4360] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" iface="eth0" netns="/var/run/netns/cni-f2c37949-5885-bdd5-67fe-d7d71e52196c" Sep 13 00:05:48.779852 env[1920]: 2025-09-13 00:05:48.708 [INFO][4360] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" iface="eth0" netns="/var/run/netns/cni-f2c37949-5885-bdd5-67fe-d7d71e52196c" Sep 13 00:05:48.779852 env[1920]: 2025-09-13 00:05:48.708 [INFO][4360] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" Sep 13 00:05:48.779852 env[1920]: 2025-09-13 00:05:48.708 [INFO][4360] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" Sep 13 00:05:48.779852 env[1920]: 2025-09-13 00:05:48.745 [INFO][4367] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" HandleID="k8s-pod-network.9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" Workload="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--frxhj-eth0" Sep 13 00:05:48.779852 env[1920]: 2025-09-13 00:05:48.745 [INFO][4367] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:48.779852 env[1920]: 2025-09-13 00:05:48.745 [INFO][4367] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:48.779852 env[1920]: 2025-09-13 00:05:48.759 [WARNING][4367] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" HandleID="k8s-pod-network.9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" Workload="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--frxhj-eth0" Sep 13 00:05:48.779852 env[1920]: 2025-09-13 00:05:48.760 [INFO][4367] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" HandleID="k8s-pod-network.9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" Workload="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--frxhj-eth0" Sep 13 00:05:48.779852 env[1920]: 2025-09-13 00:05:48.766 [INFO][4367] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:48.779852 env[1920]: 2025-09-13 00:05:48.769 [INFO][4360] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" Sep 13 00:05:48.779852 env[1920]: time="2025-09-13T00:05:48.778555823Z" level=info msg="TearDown network for sandbox \"9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323\" successfully" Sep 13 00:05:48.779852 env[1920]: time="2025-09-13T00:05:48.778705253Z" level=info msg="StopPodSandbox for \"9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323\" returns successfully" Sep 13 00:05:48.778045 systemd[1]: run-netns-cni\x2df2c37949\x2d5885\x2dbdd5\x2d67fe\x2dd7d71e52196c.mount: Deactivated successfully. 
Sep 13 00:05:48.782521 env[1920]: time="2025-09-13T00:05:48.782454832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-frxhj,Uid:e3f25ad3-33b2-4d46-a48c-0b632c9b8329,Namespace:kube-system,Attempt:1,}" Sep 13 00:05:49.110509 kubelet[3053]: I0913 00:05:49.110404 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5lfc\" (UniqueName: \"kubernetes.io/projected/4a3166ba-ae51-4f91-9351-f02ac6945348-kube-api-access-n5lfc\") pod \"whisker-656c987b58-8tccj\" (UID: \"4a3166ba-ae51-4f91-9351-f02ac6945348\") " pod="calico-system/whisker-656c987b58-8tccj" Sep 13 00:05:49.111195 kubelet[3053]: I0913 00:05:49.110550 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a3166ba-ae51-4f91-9351-f02ac6945348-whisker-ca-bundle\") pod \"whisker-656c987b58-8tccj\" (UID: \"4a3166ba-ae51-4f91-9351-f02ac6945348\") " pod="calico-system/whisker-656c987b58-8tccj" Sep 13 00:05:49.111195 kubelet[3053]: I0913 00:05:49.110623 3053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4a3166ba-ae51-4f91-9351-f02ac6945348-whisker-backend-key-pair\") pod \"whisker-656c987b58-8tccj\" (UID: \"4a3166ba-ae51-4f91-9351-f02ac6945348\") " pod="calico-system/whisker-656c987b58-8tccj" Sep 13 00:05:49.189312 (udev-worker)[4281]: Network interface NamePolicy= disabled on kernel command line. 
Sep 13 00:05:49.190747 systemd-networkd[1599]: cali35ddc4d6510: Link UP Sep 13 00:05:49.207641 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:05:49.207997 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali35ddc4d6510: link becomes ready Sep 13 00:05:49.206551 systemd-networkd[1599]: cali35ddc4d6510: Gained carrier Sep 13 00:05:49.278000 audit[4428]: AVC avc: denied { write } for pid=4428 comm="tee" name="fd" dev="proc" ino=22871 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:05:49.278000 audit[4428]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc294a7eb a2=241 a3=1b6 items=1 ppid=4401 pid=4428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:49.305829 kernel: audit: type=1400 audit(1757721949.278:295): avc: denied { write } for pid=4428 comm="tee" name="fd" dev="proc" ino=22871 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:05:49.305998 kernel: audit: type=1300 audit(1757721949.278:295): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc294a7eb a2=241 a3=1b6 items=1 ppid=4401 pid=4428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:49.310921 env[1920]: 2025-09-13 00:05:48.860 [INFO][4374] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:05:49.310921 env[1920]: 2025-09-13 00:05:48.882 [INFO][4374] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--1-k8s-coredns--7c65d6cfc9--frxhj-eth0 coredns-7c65d6cfc9- kube-system e3f25ad3-33b2-4d46-a48c-0b632c9b8329 945 0 2025-09-13 00:04:58 +0000 
UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-1 coredns-7c65d6cfc9-frxhj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali35ddc4d6510 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="54232dc65e0fbb11a984179c1a2c1ae8fe68c3ee2560e395b3760a4a07d24f27" Namespace="kube-system" Pod="coredns-7c65d6cfc9-frxhj" WorkloadEndpoint="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--frxhj-" Sep 13 00:05:49.310921 env[1920]: 2025-09-13 00:05:48.883 [INFO][4374] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="54232dc65e0fbb11a984179c1a2c1ae8fe68c3ee2560e395b3760a4a07d24f27" Namespace="kube-system" Pod="coredns-7c65d6cfc9-frxhj" WorkloadEndpoint="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--frxhj-eth0" Sep 13 00:05:49.310921 env[1920]: 2025-09-13 00:05:49.025 [INFO][4386] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="54232dc65e0fbb11a984179c1a2c1ae8fe68c3ee2560e395b3760a4a07d24f27" HandleID="k8s-pod-network.54232dc65e0fbb11a984179c1a2c1ae8fe68c3ee2560e395b3760a4a07d24f27" Workload="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--frxhj-eth0" Sep 13 00:05:49.310921 env[1920]: 2025-09-13 00:05:49.025 [INFO][4386] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="54232dc65e0fbb11a984179c1a2c1ae8fe68c3ee2560e395b3760a4a07d24f27" HandleID="k8s-pod-network.54232dc65e0fbb11a984179c1a2c1ae8fe68c3ee2560e395b3760a4a07d24f27" Workload="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--frxhj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb6c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-1", "pod":"coredns-7c65d6cfc9-frxhj", "timestamp":"2025-09-13 00:05:49.010817487 +0000 UTC"}, Hostname:"ip-172-31-29-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:05:49.310921 env[1920]: 2025-09-13 00:05:49.025 [INFO][4386] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:49.310921 env[1920]: 2025-09-13 00:05:49.025 [INFO][4386] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:49.310921 env[1920]: 2025-09-13 00:05:49.025 [INFO][4386] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-1' Sep 13 00:05:49.310921 env[1920]: 2025-09-13 00:05:49.056 [INFO][4386] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.54232dc65e0fbb11a984179c1a2c1ae8fe68c3ee2560e395b3760a4a07d24f27" host="ip-172-31-29-1" Sep 13 00:05:49.310921 env[1920]: 2025-09-13 00:05:49.084 [INFO][4386] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-1" Sep 13 00:05:49.310921 env[1920]: 2025-09-13 00:05:49.108 [INFO][4386] ipam/ipam.go 511: Trying affinity for 192.168.50.64/26 host="ip-172-31-29-1" Sep 13 00:05:49.310921 env[1920]: 2025-09-13 00:05:49.117 [INFO][4386] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.64/26 host="ip-172-31-29-1" Sep 13 00:05:49.310921 env[1920]: 2025-09-13 00:05:49.132 [INFO][4386] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.64/26 host="ip-172-31-29-1" Sep 13 00:05:49.310921 env[1920]: 2025-09-13 00:05:49.132 [INFO][4386] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.50.64/26 handle="k8s-pod-network.54232dc65e0fbb11a984179c1a2c1ae8fe68c3ee2560e395b3760a4a07d24f27" host="ip-172-31-29-1" Sep 13 00:05:49.310921 env[1920]: 2025-09-13 00:05:49.135 [INFO][4386] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.54232dc65e0fbb11a984179c1a2c1ae8fe68c3ee2560e395b3760a4a07d24f27 Sep 13 00:05:49.310921 env[1920]: 2025-09-13 00:05:49.143 [INFO][4386] ipam/ipam.go 1243: Writing block in order to claim 
IPs block=192.168.50.64/26 handle="k8s-pod-network.54232dc65e0fbb11a984179c1a2c1ae8fe68c3ee2560e395b3760a4a07d24f27" host="ip-172-31-29-1" Sep 13 00:05:49.310921 env[1920]: 2025-09-13 00:05:49.155 [INFO][4386] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.50.65/26] block=192.168.50.64/26 handle="k8s-pod-network.54232dc65e0fbb11a984179c1a2c1ae8fe68c3ee2560e395b3760a4a07d24f27" host="ip-172-31-29-1" Sep 13 00:05:49.310921 env[1920]: 2025-09-13 00:05:49.155 [INFO][4386] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.65/26] handle="k8s-pod-network.54232dc65e0fbb11a984179c1a2c1ae8fe68c3ee2560e395b3760a4a07d24f27" host="ip-172-31-29-1" Sep 13 00:05:49.310921 env[1920]: 2025-09-13 00:05:49.156 [INFO][4386] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:49.310921 env[1920]: 2025-09-13 00:05:49.156 [INFO][4386] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.65/26] IPv6=[] ContainerID="54232dc65e0fbb11a984179c1a2c1ae8fe68c3ee2560e395b3760a4a07d24f27" HandleID="k8s-pod-network.54232dc65e0fbb11a984179c1a2c1ae8fe68c3ee2560e395b3760a4a07d24f27" Workload="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--frxhj-eth0" Sep 13 00:05:49.312410 env[1920]: 2025-09-13 00:05:49.159 [INFO][4374] cni-plugin/k8s.go 418: Populated endpoint ContainerID="54232dc65e0fbb11a984179c1a2c1ae8fe68c3ee2560e395b3760a4a07d24f27" Namespace="kube-system" Pod="coredns-7c65d6cfc9-frxhj" WorkloadEndpoint="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--frxhj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--1-k8s-coredns--7c65d6cfc9--frxhj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"e3f25ad3-33b2-4d46-a48c-0b632c9b8329", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 4, 58, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-1", ContainerID:"", Pod:"coredns-7c65d6cfc9-frxhj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35ddc4d6510", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:49.312410 env[1920]: 2025-09-13 00:05:49.159 [INFO][4374] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.65/32] ContainerID="54232dc65e0fbb11a984179c1a2c1ae8fe68c3ee2560e395b3760a4a07d24f27" Namespace="kube-system" Pod="coredns-7c65d6cfc9-frxhj" WorkloadEndpoint="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--frxhj-eth0" Sep 13 00:05:49.312410 env[1920]: 2025-09-13 00:05:49.160 [INFO][4374] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali35ddc4d6510 ContainerID="54232dc65e0fbb11a984179c1a2c1ae8fe68c3ee2560e395b3760a4a07d24f27" Namespace="kube-system" Pod="coredns-7c65d6cfc9-frxhj" 
WorkloadEndpoint="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--frxhj-eth0" Sep 13 00:05:49.312410 env[1920]: 2025-09-13 00:05:49.223 [INFO][4374] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="54232dc65e0fbb11a984179c1a2c1ae8fe68c3ee2560e395b3760a4a07d24f27" Namespace="kube-system" Pod="coredns-7c65d6cfc9-frxhj" WorkloadEndpoint="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--frxhj-eth0" Sep 13 00:05:49.312410 env[1920]: 2025-09-13 00:05:49.233 [INFO][4374] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="54232dc65e0fbb11a984179c1a2c1ae8fe68c3ee2560e395b3760a4a07d24f27" Namespace="kube-system" Pod="coredns-7c65d6cfc9-frxhj" WorkloadEndpoint="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--frxhj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--1-k8s-coredns--7c65d6cfc9--frxhj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"e3f25ad3-33b2-4d46-a48c-0b632c9b8329", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 4, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-1", ContainerID:"54232dc65e0fbb11a984179c1a2c1ae8fe68c3ee2560e395b3760a4a07d24f27", Pod:"coredns-7c65d6cfc9-frxhj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35ddc4d6510", MAC:"3a:58:c6:6a:39:28", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:49.312410 env[1920]: 2025-09-13 00:05:49.290 [INFO][4374] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="54232dc65e0fbb11a984179c1a2c1ae8fe68c3ee2560e395b3760a4a07d24f27" Namespace="kube-system" Pod="coredns-7c65d6cfc9-frxhj" WorkloadEndpoint="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--frxhj-eth0" Sep 13 00:05:49.278000 audit: CWD cwd="/etc/service/enabled/confd/log" Sep 13 00:05:49.327436 kernel: audit: type=1307 audit(1757721949.278:295): cwd="/etc/service/enabled/confd/log" Sep 13 00:05:49.278000 audit: PATH item=0 name="/dev/fd/63" inode=21794 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:05:49.339714 kernel: audit: type=1302 audit(1757721949.278:295): item=0 name="/dev/fd/63" inode=21794 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:05:49.278000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:05:49.353088 kernel: audit: type=1327 audit(1757721949.278:295): 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:05:49.364615 env[1920]: time="2025-09-13T00:05:49.363198871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:49.364615 env[1920]: time="2025-09-13T00:05:49.363309692Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:49.364615 env[1920]: time="2025-09-13T00:05:49.363337667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:49.367208 env[1920]: time="2025-09-13T00:05:49.367151814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-656c987b58-8tccj,Uid:4a3166ba-ae51-4f91-9351-f02ac6945348,Namespace:calico-system,Attempt:0,}" Sep 13 00:05:49.367705 env[1920]: time="2025-09-13T00:05:49.367144733Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/54232dc65e0fbb11a984179c1a2c1ae8fe68c3ee2560e395b3760a4a07d24f27 pid=4474 runtime=io.containerd.runc.v2 Sep 13 00:05:49.358000 audit[4447]: AVC avc: denied { write } for pid=4447 comm="tee" name="fd" dev="proc" ino=22908 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:05:49.375960 kernel: audit: type=1400 audit(1757721949.358:296): avc: denied { write } for pid=4447 comm="tee" name="fd" dev="proc" ino=22908 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:05:49.358000 audit[4447]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe44327ec a2=241 a3=1b6 items=1 ppid=4408 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:49.406693 kernel: audit: type=1300 audit(1757721949.358:296): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe44327ec a2=241 a3=1b6 items=1 ppid=4408 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:49.358000 audit: CWD cwd="/etc/service/enabled/bird/log" Sep 13 00:05:49.435150 kernel: audit: type=1307 audit(1757721949.358:296): cwd="/etc/service/enabled/bird/log" Sep 13 00:05:49.447807 kernel: audit: type=1302 audit(1757721949.358:296): item=0 name="/dev/fd/63" inode=22883 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:05:49.358000 audit: PATH item=0 name="/dev/fd/63" inode=22883 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:05:49.358000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:05:49.459346 kernel: audit: type=1327 audit(1757721949.358:296): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:05:49.370000 audit[4468]: AVC avc: denied { write } for pid=4468 comm="tee" name="fd" dev="proc" ino=22914 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:05:49.370000 audit[4468]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffff9f07dc a2=241 a3=1b6 items=1 ppid=4406 pid=4468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:49.370000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Sep 13 00:05:49.370000 audit: PATH item=0 name="/dev/fd/63" inode=22900 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:05:49.370000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:05:49.412000 audit[4481]: AVC avc: denied { write } for pid=4481 comm="tee" name="fd" dev="proc" ino=22923 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:05:49.412000 audit[4481]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe85947eb a2=241 a3=1b6 items=1 ppid=4414 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:49.412000 audit: CWD cwd="/etc/service/enabled/felix/log" Sep 13 00:05:49.412000 audit: PATH item=0 name="/dev/fd/63" inode=21825 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:05:49.412000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:05:49.419000 audit[4493]: AVC avc: denied { write } for pid=4493 comm="tee" name="fd" dev="proc" ino=22947 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:05:49.419000 audit[4493]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 
a0=ffffffffffffff9c a1=ffffc9a927ed a2=241 a3=1b6 items=1 ppid=4420 pid=4493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:49.419000 audit: CWD cwd="/etc/service/enabled/cni/log" Sep 13 00:05:49.419000 audit: PATH item=0 name="/dev/fd/63" inode=22916 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:05:49.419000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:05:49.480000 audit[4501]: AVC avc: denied { write } for pid=4501 comm="tee" name="fd" dev="proc" ino=22961 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:05:49.480000 audit[4501]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd7a787db a2=241 a3=1b6 items=1 ppid=4405 pid=4501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:49.480000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Sep 13 00:05:49.480000 audit: PATH item=0 name="/dev/fd/63" inode=21836 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:05:49.480000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:05:49.489000 audit[4503]: AVC avc: denied { write } for pid=4503 comm="tee" name="fd" dev="proc" ino=21857 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=dir permissive=0 Sep 13 00:05:49.489000 audit[4503]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff365b7eb a2=241 a3=1b6 items=1 ppid=4402 pid=4503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:49.489000 audit: CWD cwd="/etc/service/enabled/bird6/log" Sep 13 00:05:49.489000 audit: PATH item=0 name="/dev/fd/63" inode=21837 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:05:49.489000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:05:49.622987 env[1920]: time="2025-09-13T00:05:49.622827195Z" level=info msg="StopPodSandbox for \"7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec\"" Sep 13 00:05:49.626232 env[1920]: time="2025-09-13T00:05:49.625130272Z" level=info msg="StopPodSandbox for \"0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0\"" Sep 13 00:05:49.786359 env[1920]: time="2025-09-13T00:05:49.786293538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-frxhj,Uid:e3f25ad3-33b2-4d46-a48c-0b632c9b8329,Namespace:kube-system,Attempt:1,} returns sandbox id \"54232dc65e0fbb11a984179c1a2c1ae8fe68c3ee2560e395b3760a4a07d24f27\"" Sep 13 00:05:49.807671 env[1920]: time="2025-09-13T00:05:49.807252460Z" level=info msg="CreateContainer within sandbox \"54232dc65e0fbb11a984179c1a2c1ae8fe68c3ee2560e395b3760a4a07d24f27\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:05:50.004922 env[1920]: time="2025-09-13T00:05:50.003340589Z" level=info msg="CreateContainer within sandbox \"54232dc65e0fbb11a984179c1a2c1ae8fe68c3ee2560e395b3760a4a07d24f27\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3a591e37f7ae4b99f81ef3c83224ee0afc8503c202a0a50b2a9cb0510ef9ee80\"" Sep 13 00:05:50.011691 env[1920]: time="2025-09-13T00:05:50.011615383Z" level=info msg="StartContainer for \"3a591e37f7ae4b99f81ef3c83224ee0afc8503c202a0a50b2a9cb0510ef9ee80\"" Sep 13 00:05:50.410428 env[1920]: time="2025-09-13T00:05:50.410353704Z" level=info msg="StartContainer for \"3a591e37f7ae4b99f81ef3c83224ee0afc8503c202a0a50b2a9cb0510ef9ee80\" returns successfully" Sep 13 00:05:50.622645 env[1920]: time="2025-09-13T00:05:50.620389748Z" level=info msg="StopPodSandbox for \"dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec\"" Sep 13 00:05:50.623501 env[1920]: time="2025-09-13T00:05:50.623423723Z" level=info msg="StopPodSandbox for \"8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85\"" Sep 13 00:05:50.628998 kubelet[3053]: I0913 00:05:50.625391 3053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fea8edb8-d901-4812-96e3-2689d15918fb" path="/var/lib/kubelet/pods/fea8edb8-d901-4812-96e3-2689d15918fb/volumes" Sep 13 00:05:50.738719 (udev-worker)[4434]: Network interface NamePolicy= disabled on kernel command line. 
Sep 13 00:05:50.740254 systemd-networkd[1599]: cali5bf21a77f93: Link UP Sep 13 00:05:50.758600 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:05:50.758743 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5bf21a77f93: link becomes ready Sep 13 00:05:50.759022 systemd-networkd[1599]: cali5bf21a77f93: Gained carrier Sep 13 00:05:50.813095 env[1920]: 2025-09-13 00:05:49.686 [INFO][4508] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:05:50.813095 env[1920]: 2025-09-13 00:05:49.850 [INFO][4508] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--1-k8s-whisker--656c987b58--8tccj-eth0 whisker-656c987b58- calico-system 4a3166ba-ae51-4f91-9351-f02ac6945348 959 0 2025-09-13 00:05:48 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:656c987b58 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-29-1 whisker-656c987b58-8tccj eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5bf21a77f93 [] [] }} ContainerID="ff48d1fadd4a3e58e0bdebb1872e609b966d2148fb4b04c0efc894f4d0161838" Namespace="calico-system" Pod="whisker-656c987b58-8tccj" WorkloadEndpoint="ip--172--31--29--1-k8s-whisker--656c987b58--8tccj-" Sep 13 00:05:50.813095 env[1920]: 2025-09-13 00:05:49.850 [INFO][4508] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ff48d1fadd4a3e58e0bdebb1872e609b966d2148fb4b04c0efc894f4d0161838" Namespace="calico-system" Pod="whisker-656c987b58-8tccj" WorkloadEndpoint="ip--172--31--29--1-k8s-whisker--656c987b58--8tccj-eth0" Sep 13 00:05:50.813095 env[1920]: 2025-09-13 00:05:50.408 [INFO][4572] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff48d1fadd4a3e58e0bdebb1872e609b966d2148fb4b04c0efc894f4d0161838" 
HandleID="k8s-pod-network.ff48d1fadd4a3e58e0bdebb1872e609b966d2148fb4b04c0efc894f4d0161838" Workload="ip--172--31--29--1-k8s-whisker--656c987b58--8tccj-eth0" Sep 13 00:05:50.813095 env[1920]: 2025-09-13 00:05:50.409 [INFO][4572] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ff48d1fadd4a3e58e0bdebb1872e609b966d2148fb4b04c0efc894f4d0161838" HandleID="k8s-pod-network.ff48d1fadd4a3e58e0bdebb1872e609b966d2148fb4b04c0efc894f4d0161838" Workload="ip--172--31--29--1-k8s-whisker--656c987b58--8tccj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000122470), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-1", "pod":"whisker-656c987b58-8tccj", "timestamp":"2025-09-13 00:05:50.408666387 +0000 UTC"}, Hostname:"ip-172-31-29-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:05:50.813095 env[1920]: 2025-09-13 00:05:50.417 [INFO][4572] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:50.813095 env[1920]: 2025-09-13 00:05:50.448 [INFO][4572] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:05:50.813095 env[1920]: 2025-09-13 00:05:50.448 [INFO][4572] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-1' Sep 13 00:05:50.813095 env[1920]: 2025-09-13 00:05:50.524 [INFO][4572] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ff48d1fadd4a3e58e0bdebb1872e609b966d2148fb4b04c0efc894f4d0161838" host="ip-172-31-29-1" Sep 13 00:05:50.813095 env[1920]: 2025-09-13 00:05:50.552 [INFO][4572] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-1" Sep 13 00:05:50.813095 env[1920]: 2025-09-13 00:05:50.604 [INFO][4572] ipam/ipam.go 511: Trying affinity for 192.168.50.64/26 host="ip-172-31-29-1" Sep 13 00:05:50.813095 env[1920]: 2025-09-13 00:05:50.630 [INFO][4572] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.64/26 host="ip-172-31-29-1" Sep 13 00:05:50.813095 env[1920]: 2025-09-13 00:05:50.641 [INFO][4572] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.64/26 host="ip-172-31-29-1" Sep 13 00:05:50.813095 env[1920]: 2025-09-13 00:05:50.641 [INFO][4572] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.50.64/26 handle="k8s-pod-network.ff48d1fadd4a3e58e0bdebb1872e609b966d2148fb4b04c0efc894f4d0161838" host="ip-172-31-29-1" Sep 13 00:05:50.813095 env[1920]: 2025-09-13 00:05:50.659 [INFO][4572] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ff48d1fadd4a3e58e0bdebb1872e609b966d2148fb4b04c0efc894f4d0161838 Sep 13 00:05:50.813095 env[1920]: 2025-09-13 00:05:50.677 [INFO][4572] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.50.64/26 handle="k8s-pod-network.ff48d1fadd4a3e58e0bdebb1872e609b966d2148fb4b04c0efc894f4d0161838" host="ip-172-31-29-1" Sep 13 00:05:50.813095 env[1920]: 2025-09-13 00:05:50.691 [INFO][4572] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.50.66/26] block=192.168.50.64/26 handle="k8s-pod-network.ff48d1fadd4a3e58e0bdebb1872e609b966d2148fb4b04c0efc894f4d0161838" 
host="ip-172-31-29-1" Sep 13 00:05:50.813095 env[1920]: 2025-09-13 00:05:50.691 [INFO][4572] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.66/26] handle="k8s-pod-network.ff48d1fadd4a3e58e0bdebb1872e609b966d2148fb4b04c0efc894f4d0161838" host="ip-172-31-29-1" Sep 13 00:05:50.813095 env[1920]: 2025-09-13 00:05:50.691 [INFO][4572] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:50.813095 env[1920]: 2025-09-13 00:05:50.691 [INFO][4572] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.66/26] IPv6=[] ContainerID="ff48d1fadd4a3e58e0bdebb1872e609b966d2148fb4b04c0efc894f4d0161838" HandleID="k8s-pod-network.ff48d1fadd4a3e58e0bdebb1872e609b966d2148fb4b04c0efc894f4d0161838" Workload="ip--172--31--29--1-k8s-whisker--656c987b58--8tccj-eth0" Sep 13 00:05:50.817377 env[1920]: 2025-09-13 00:05:50.707 [INFO][4508] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ff48d1fadd4a3e58e0bdebb1872e609b966d2148fb4b04c0efc894f4d0161838" Namespace="calico-system" Pod="whisker-656c987b58-8tccj" WorkloadEndpoint="ip--172--31--29--1-k8s-whisker--656c987b58--8tccj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--1-k8s-whisker--656c987b58--8tccj-eth0", GenerateName:"whisker-656c987b58-", Namespace:"calico-system", SelfLink:"", UID:"4a3166ba-ae51-4f91-9351-f02ac6945348", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"656c987b58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-1", ContainerID:"", Pod:"whisker-656c987b58-8tccj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.50.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5bf21a77f93", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:50.817377 env[1920]: 2025-09-13 00:05:50.707 [INFO][4508] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.66/32] ContainerID="ff48d1fadd4a3e58e0bdebb1872e609b966d2148fb4b04c0efc894f4d0161838" Namespace="calico-system" Pod="whisker-656c987b58-8tccj" WorkloadEndpoint="ip--172--31--29--1-k8s-whisker--656c987b58--8tccj-eth0" Sep 13 00:05:50.817377 env[1920]: 2025-09-13 00:05:50.707 [INFO][4508] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5bf21a77f93 ContainerID="ff48d1fadd4a3e58e0bdebb1872e609b966d2148fb4b04c0efc894f4d0161838" Namespace="calico-system" Pod="whisker-656c987b58-8tccj" WorkloadEndpoint="ip--172--31--29--1-k8s-whisker--656c987b58--8tccj-eth0" Sep 13 00:05:50.817377 env[1920]: 2025-09-13 00:05:50.761 [INFO][4508] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff48d1fadd4a3e58e0bdebb1872e609b966d2148fb4b04c0efc894f4d0161838" Namespace="calico-system" Pod="whisker-656c987b58-8tccj" WorkloadEndpoint="ip--172--31--29--1-k8s-whisker--656c987b58--8tccj-eth0" Sep 13 00:05:50.817377 env[1920]: 2025-09-13 00:05:50.763 [INFO][4508] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ff48d1fadd4a3e58e0bdebb1872e609b966d2148fb4b04c0efc894f4d0161838" Namespace="calico-system" Pod="whisker-656c987b58-8tccj" WorkloadEndpoint="ip--172--31--29--1-k8s-whisker--656c987b58--8tccj-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--1-k8s-whisker--656c987b58--8tccj-eth0", GenerateName:"whisker-656c987b58-", Namespace:"calico-system", SelfLink:"", UID:"4a3166ba-ae51-4f91-9351-f02ac6945348", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"656c987b58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-1", ContainerID:"ff48d1fadd4a3e58e0bdebb1872e609b966d2148fb4b04c0efc894f4d0161838", Pod:"whisker-656c987b58-8tccj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.50.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5bf21a77f93", MAC:"ba:51:1a:c8:b4:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:50.817377 env[1920]: 2025-09-13 00:05:50.808 [INFO][4508] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ff48d1fadd4a3e58e0bdebb1872e609b966d2148fb4b04c0efc894f4d0161838" Namespace="calico-system" Pod="whisker-656c987b58-8tccj" WorkloadEndpoint="ip--172--31--29--1-k8s-whisker--656c987b58--8tccj-eth0" Sep 13 00:05:50.822243 env[1920]: 2025-09-13 00:05:50.052 [INFO][4555] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" Sep 13 00:05:50.822243 env[1920]: 2025-09-13 00:05:50.057 [INFO][4555] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" iface="eth0" netns="/var/run/netns/cni-fab9b13e-9106-53cd-11ba-a2d2161d75a5" Sep 13 00:05:50.822243 env[1920]: 2025-09-13 00:05:50.057 [INFO][4555] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" iface="eth0" netns="/var/run/netns/cni-fab9b13e-9106-53cd-11ba-a2d2161d75a5" Sep 13 00:05:50.822243 env[1920]: 2025-09-13 00:05:50.058 [INFO][4555] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" iface="eth0" netns="/var/run/netns/cni-fab9b13e-9106-53cd-11ba-a2d2161d75a5" Sep 13 00:05:50.822243 env[1920]: 2025-09-13 00:05:50.058 [INFO][4555] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" Sep 13 00:05:50.822243 env[1920]: 2025-09-13 00:05:50.058 [INFO][4555] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" Sep 13 00:05:50.822243 env[1920]: 2025-09-13 00:05:50.504 [INFO][4585] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" HandleID="k8s-pod-network.0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" Workload="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--tpkzf-eth0" Sep 13 00:05:50.822243 env[1920]: 2025-09-13 00:05:50.504 [INFO][4585] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:05:50.822243 env[1920]: 2025-09-13 00:05:50.692 [INFO][4585] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:50.822243 env[1920]: 2025-09-13 00:05:50.776 [WARNING][4585] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" HandleID="k8s-pod-network.0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" Workload="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--tpkzf-eth0" Sep 13 00:05:50.822243 env[1920]: 2025-09-13 00:05:50.776 [INFO][4585] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" HandleID="k8s-pod-network.0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" Workload="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--tpkzf-eth0" Sep 13 00:05:50.822243 env[1920]: 2025-09-13 00:05:50.804 [INFO][4585] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:50.822243 env[1920]: 2025-09-13 00:05:50.819 [INFO][4555] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" Sep 13 00:05:50.832631 systemd[1]: run-netns-cni\x2dfab9b13e\x2d9106\x2d53cd\x2d11ba\x2da2d2161d75a5.mount: Deactivated successfully. 
Sep 13 00:05:50.841443 env[1920]: time="2025-09-13T00:05:50.841167977Z" level=info msg="TearDown network for sandbox \"0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0\" successfully" Sep 13 00:05:50.841796 env[1920]: time="2025-09-13T00:05:50.841699096Z" level=info msg="StopPodSandbox for \"0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0\" returns successfully" Sep 13 00:05:50.850348 env[1920]: time="2025-09-13T00:05:50.850253749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8978b56b9-tpkzf,Uid:d0a25691-16ce-4727-a17e-3adde345074c,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:05:51.077615 kubelet[3053]: I0913 00:05:51.070580 3053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-frxhj" podStartSLOduration=53.070560534 podStartE2EDuration="53.070560534s" podCreationTimestamp="2025-09-13 00:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:05:51.014699291 +0000 UTC m=+56.731999484" watchObservedRunningTime="2025-09-13 00:05:51.070560534 +0000 UTC m=+56.787860703" Sep 13 00:05:51.083984 systemd-networkd[1599]: cali35ddc4d6510: Gained IPv6LL Sep 13 00:05:51.133000 audit[4699]: NETFILTER_CFG table=filter:101 family=2 entries=20 op=nft_register_rule pid=4699 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:05:51.133000 audit[4699]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe72ef6a0 a2=0 a3=1 items=0 ppid=3158 pid=4699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:51.133000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:05:51.147000 
audit[4699]: NETFILTER_CFG table=nat:102 family=2 entries=14 op=nft_register_rule pid=4699 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:05:51.147000 audit[4699]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffe72ef6a0 a2=0 a3=1 items=0 ppid=3158 pid=4699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:51.147000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:05:51.164475 env[1920]: time="2025-09-13T00:05:51.164323088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:51.164628 env[1920]: time="2025-09-13T00:05:51.164501067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:51.164628 env[1920]: time="2025-09-13T00:05:51.164589013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:51.182466 env[1920]: 2025-09-13 00:05:50.576 [INFO][4564] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" Sep 13 00:05:51.182466 env[1920]: 2025-09-13 00:05:50.576 [INFO][4564] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" iface="eth0" netns="/var/run/netns/cni-92c5b39f-977f-3c52-a1df-13ef5b803a95" Sep 13 00:05:51.182466 env[1920]: 2025-09-13 00:05:50.576 [INFO][4564] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" iface="eth0" netns="/var/run/netns/cni-92c5b39f-977f-3c52-a1df-13ef5b803a95" Sep 13 00:05:51.182466 env[1920]: 2025-09-13 00:05:50.577 [INFO][4564] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" iface="eth0" netns="/var/run/netns/cni-92c5b39f-977f-3c52-a1df-13ef5b803a95" Sep 13 00:05:51.182466 env[1920]: 2025-09-13 00:05:50.577 [INFO][4564] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" Sep 13 00:05:51.182466 env[1920]: 2025-09-13 00:05:50.577 [INFO][4564] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" Sep 13 00:05:51.182466 env[1920]: 2025-09-13 00:05:51.080 [INFO][4623] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" HandleID="k8s-pod-network.7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" Workload="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--mstsf-eth0" Sep 13 00:05:51.182466 env[1920]: 2025-09-13 00:05:51.085 [INFO][4623] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:51.182466 env[1920]: 2025-09-13 00:05:51.101 [INFO][4623] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:51.182466 env[1920]: 2025-09-13 00:05:51.163 [WARNING][4623] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" HandleID="k8s-pod-network.7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" Workload="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--mstsf-eth0" Sep 13 00:05:51.182466 env[1920]: 2025-09-13 00:05:51.163 [INFO][4623] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" HandleID="k8s-pod-network.7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" Workload="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--mstsf-eth0" Sep 13 00:05:51.182466 env[1920]: 2025-09-13 00:05:51.169 [INFO][4623] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:51.182466 env[1920]: 2025-09-13 00:05:51.174 [INFO][4564] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" Sep 13 00:05:51.190559 systemd[1]: run-netns-cni\x2d92c5b39f\x2d977f\x2d3c52\x2da1df\x2d13ef5b803a95.mount: Deactivated successfully. 
Sep 13 00:05:51.198721 env[1920]: time="2025-09-13T00:05:51.198573723Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ff48d1fadd4a3e58e0bdebb1872e609b966d2148fb4b04c0efc894f4d0161838 pid=4691 runtime=io.containerd.runc.v2 Sep 13 00:05:51.203000 audit[4715]: NETFILTER_CFG table=filter:103 family=2 entries=17 op=nft_register_rule pid=4715 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:05:51.207521 env[1920]: time="2025-09-13T00:05:51.206205401Z" level=info msg="TearDown network for sandbox \"7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec\" successfully" Sep 13 00:05:51.207521 env[1920]: time="2025-09-13T00:05:51.206276365Z" level=info msg="StopPodSandbox for \"7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec\" returns successfully" Sep 13 00:05:51.203000 audit[4715]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff118b560 a2=0 a3=1 items=0 ppid=3158 pid=4715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:51.203000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:05:51.211785 env[1920]: time="2025-09-13T00:05:51.210724013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8978b56b9-mstsf,Uid:527909b5-007b-4b06-af5d-92e50eb61126,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:05:51.212000 audit[4715]: NETFILTER_CFG table=nat:104 family=2 entries=35 op=nft_register_chain pid=4715 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:05:51.212000 audit[4715]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=fffff118b560 a2=0 a3=1 items=0 ppid=3158 pid=4715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:51.212000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:05:51.414000 audit[4762]: AVC avc: denied { bpf } for pid=4762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.414000 audit[4762]: AVC avc: denied { bpf } for pid=4762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.414000 audit[4762]: AVC avc: denied { perfmon } for pid=4762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.414000 audit[4762]: AVC avc: denied { perfmon } for pid=4762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.414000 audit[4762]: AVC avc: denied { perfmon } for pid=4762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.414000 audit[4762]: AVC avc: denied { perfmon } for pid=4762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.414000 audit[4762]: AVC avc: denied { perfmon } for pid=4762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.414000 audit[4762]: AVC avc: denied { bpf } for pid=4762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 13 00:05:51.414000 audit[4762]: AVC avc: denied { bpf } for pid=4762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.414000 audit: BPF prog-id=10 op=LOAD Sep 13 00:05:51.414000 audit[4762]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd1c4cfd8 a2=98 a3=ffffd1c4cfc8 items=0 ppid=4415 pid=4762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:51.414000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:05:51.415000 audit: BPF prog-id=10 op=UNLOAD Sep 13 00:05:51.415000 audit[4762]: AVC avc: denied { bpf } for pid=4762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.415000 audit[4762]: AVC avc: denied { bpf } for pid=4762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.415000 audit[4762]: AVC avc: denied { perfmon } for pid=4762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.415000 audit[4762]: AVC avc: denied { perfmon } for pid=4762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.415000 audit[4762]: AVC avc: denied { perfmon } for pid=4762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.415000 audit[4762]: AVC avc: denied { perfmon } for pid=4762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.415000 audit[4762]: AVC avc: denied { perfmon } for pid=4762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.415000 audit[4762]: AVC avc: denied { bpf } for pid=4762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.415000 audit[4762]: AVC avc: denied { bpf } for pid=4762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.415000 audit: BPF prog-id=11 op=LOAD Sep 13 00:05:51.415000 audit[4762]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd1c4ce88 a2=74 a3=95 items=0 ppid=4415 pid=4762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:51.415000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:05:51.420000 audit: BPF prog-id=11 op=UNLOAD Sep 13 00:05:51.420000 audit[4762]: AVC avc: denied { bpf } for pid=4762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.420000 audit[4762]: AVC avc: denied { bpf } for pid=4762 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.420000 audit[4762]: AVC avc: denied { perfmon } for pid=4762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.420000 audit[4762]: AVC avc: denied { perfmon } for pid=4762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.420000 audit[4762]: AVC avc: denied { perfmon } for pid=4762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.420000 audit[4762]: AVC avc: denied { perfmon } for pid=4762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.420000 audit[4762]: AVC avc: denied { perfmon } for pid=4762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.420000 audit[4762]: AVC avc: denied { bpf } for pid=4762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.420000 audit[4762]: AVC avc: denied { bpf } for pid=4762 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.420000 audit: BPF prog-id=12 op=LOAD Sep 13 00:05:51.420000 audit[4762]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd1c4ceb8 a2=40 a3=ffffd1c4cee8 items=0 ppid=4415 pid=4762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Sep 13 00:05:51.420000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:05:51.421000 audit: BPF prog-id=12 op=UNLOAD Sep 13 00:05:51.421000 audit[4762]: AVC avc: denied { perfmon } for pid=4762 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.421000 audit[4762]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=0 a1=ffffd1c4cfd0 a2=50 a3=0 items=0 ppid=4415 pid=4762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:51.421000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:05:51.432000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.432000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.432000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.432000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.432000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.432000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.432000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.432000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.432000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.432000 audit: BPF prog-id=13 op=LOAD Sep 13 00:05:51.432000 audit[4765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff7718e28 a2=98 a3=fffff7718e18 items=0 ppid=4415 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:51.432000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:05:51.432000 audit: BPF prog-id=13 op=UNLOAD Sep 13 00:05:51.432000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.432000 audit[4765]: AVC avc: denied { bpf } 
for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.432000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.432000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.432000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.432000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.432000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.432000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.432000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.432000 audit: BPF prog-id=14 op=LOAD Sep 13 00:05:51.432000 audit[4765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff7718ab8 a2=74 a3=95 items=0 ppid=4415 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:51.432000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:05:51.436000 audit: BPF prog-id=14 op=UNLOAD Sep 13 00:05:51.436000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.436000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.436000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.436000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.436000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.436000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.436000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.436000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.436000 audit[4765]: AVC avc: denied { bpf } for pid=4765 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.436000 audit: BPF prog-id=15 op=LOAD Sep 13 00:05:51.436000 audit[4765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff7718b18 a2=94 a3=2 items=0 ppid=4415 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:51.436000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:05:51.436000 audit: BPF prog-id=15 op=UNLOAD Sep 13 00:05:51.745403 env[1920]: time="2025-09-13T00:05:51.745217735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-656c987b58-8tccj,Uid:4a3166ba-ae51-4f91-9351-f02ac6945348,Namespace:calico-system,Attempt:0,} returns sandbox id \"ff48d1fadd4a3e58e0bdebb1872e609b966d2148fb4b04c0efc894f4d0161838\"" Sep 13 00:05:51.753697 env[1920]: time="2025-09-13T00:05:51.753641656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:05:51.790000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.790000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.790000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.790000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 
13 00:05:51.790000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.790000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.790000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.790000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.790000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.790000 audit: BPF prog-id=16 op=LOAD Sep 13 00:05:51.790000 audit[4765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff7718ad8 a2=40 a3=fffff7718b08 items=0 ppid=4415 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:51.790000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:05:51.791000 audit: BPF prog-id=16 op=UNLOAD Sep 13 00:05:51.791000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.791000 audit[4765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=fffff7718bf0 a2=50 a3=0 items=0 ppid=4415 pid=4765 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:51.791000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:05:51.809000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.809000 audit[4765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff7718b48 a2=28 a3=fffff7718c78 items=0 ppid=4415 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:51.809000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:05:51.810000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.810000 audit[4765]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff7718b78 a2=28 a3=fffff7718ca8 items=0 ppid=4415 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:51.810000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:05:51.811000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.811000 audit[4765]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff7718a28 a2=28 a3=fffff7718b58 items=0 ppid=4415 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:51.811000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:05:51.811000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.811000 audit[4765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff7718b98 a2=28 a3=fffff7718cc8 items=0 ppid=4415 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:51.811000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:05:51.811000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.811000 audit[4765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff7718b78 a2=28 a3=fffff7718ca8 items=0 ppid=4415 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:51.811000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:05:51.811000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.811000 audit[4765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff7718b68 a2=28 a3=fffff7718c98 items=0 ppid=4415 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:51.811000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:05:51.811000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.811000 audit[4765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff7718b98 a2=28 a3=fffff7718cc8 items=0 ppid=4415 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:51.811000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:05:51.811000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.811000 audit[4765]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff7718b78 a2=28 a3=fffff7718ca8 items=0 ppid=4415 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:51.811000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:05:51.811000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.811000 audit[4765]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff7718b98 a2=28 a3=fffff7718cc8 items=0 ppid=4415 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:51.811000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:05:51.811000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.811000 audit[4765]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff7718b68 a2=28 a3=fffff7718c98 items=0 ppid=4415 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:51.811000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:05:51.811000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.811000 audit[4765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff7718be8 a2=28 a3=fffff7718d28 items=0 ppid=4415 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:51.811000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:05:51.812000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.812000 audit[4765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=fffff7718920 a2=50 a3=0 items=0 ppid=4415 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 
00:05:51.812000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:05:51.812000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.812000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.812000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.812000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.812000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.812000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.812000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.812000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.812000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 13 00:05:51.812000 audit: BPF prog-id=17 op=LOAD Sep 13 00:05:51.812000 audit[4765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff7718928 a2=94 a3=5 items=0 ppid=4415 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:51.812000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:05:51.812000 audit: BPF prog-id=17 op=UNLOAD Sep 13 00:05:51.812000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.812000 audit[4765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=fffff7718a30 a2=50 a3=0 items=0 ppid=4415 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:51.812000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:05:51.812000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.812000 audit[4765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=fffff7718b78 a2=4 a3=3 items=0 ppid=4415 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:51.812000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:05:51.812000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.812000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.812000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.812000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.812000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.812000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.812000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.812000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.812000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.812000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Sep 13 00:05:51.812000 audit[4765]: AVC avc: denied { confidentiality } for pid=4765 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:05:51.812000 audit[4765]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffff7718b58 a2=94 a3=6 items=0 ppid=4415 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:51.812000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:05:51.817000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.817000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.817000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.817000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.817000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.817000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 
00:05:51.817000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.817000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.817000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.817000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.817000 audit[4765]: AVC avc: denied { confidentiality } for pid=4765 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:05:51.817000 audit[4765]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffff7718328 a2=94 a3=83 items=0 ppid=4415 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:51.817000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:05:51.818000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.818000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.818000 audit[4765]: 
AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.818000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.818000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.818000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.818000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.818000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.818000 audit[4765]: AVC avc: denied { perfmon } for pid=4765 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.818000 audit[4765]: AVC avc: denied { bpf } for pid=4765 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.818000 audit[4765]: AVC avc: denied { confidentiality } for pid=4765 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:05:51.818000 audit[4765]: SYSCALL arch=c00000b7 
syscall=280 success=no exit=-22 a0=5 a1=fffff7718328 a2=94 a3=83 items=0 ppid=4415 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:51.818000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:05:51.835468 env[1920]: 2025-09-13 00:05:51.358 [INFO][4662] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" Sep 13 00:05:51.835468 env[1920]: 2025-09-13 00:05:51.359 [INFO][4662] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" iface="eth0" netns="/var/run/netns/cni-7b34943e-b227-eded-0417-dac04359c784" Sep 13 00:05:51.835468 env[1920]: 2025-09-13 00:05:51.360 [INFO][4662] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" iface="eth0" netns="/var/run/netns/cni-7b34943e-b227-eded-0417-dac04359c784" Sep 13 00:05:51.835468 env[1920]: 2025-09-13 00:05:51.361 [INFO][4662] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" iface="eth0" netns="/var/run/netns/cni-7b34943e-b227-eded-0417-dac04359c784" Sep 13 00:05:51.835468 env[1920]: 2025-09-13 00:05:51.361 [INFO][4662] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" Sep 13 00:05:51.835468 env[1920]: 2025-09-13 00:05:51.361 [INFO][4662] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" Sep 13 00:05:51.835468 env[1920]: 2025-09-13 00:05:51.779 [INFO][4753] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" HandleID="k8s-pod-network.8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" Workload="ip--172--31--29--1-k8s-csi--node--driver--cdkj7-eth0" Sep 13 00:05:51.835468 env[1920]: 2025-09-13 00:05:51.783 [INFO][4753] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:51.835468 env[1920]: 2025-09-13 00:05:51.784 [INFO][4753] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:51.835468 env[1920]: 2025-09-13 00:05:51.811 [WARNING][4753] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" HandleID="k8s-pod-network.8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" Workload="ip--172--31--29--1-k8s-csi--node--driver--cdkj7-eth0" Sep 13 00:05:51.835468 env[1920]: 2025-09-13 00:05:51.811 [INFO][4753] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" HandleID="k8s-pod-network.8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" Workload="ip--172--31--29--1-k8s-csi--node--driver--cdkj7-eth0" Sep 13 00:05:51.835468 env[1920]: 2025-09-13 00:05:51.815 [INFO][4753] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:51.835468 env[1920]: 2025-09-13 00:05:51.832 [INFO][4662] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" Sep 13 00:05:51.840122 env[1920]: time="2025-09-13T00:05:51.840040137Z" level=info msg="TearDown network for sandbox \"8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85\" successfully" Sep 13 00:05:51.840461 env[1920]: time="2025-09-13T00:05:51.840297133Z" level=info msg="StopPodSandbox for \"8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85\" returns successfully" Sep 13 00:05:51.842240 env[1920]: time="2025-09-13T00:05:51.842185118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cdkj7,Uid:b0810e85-a772-4d04-a431-1cf9fa4bc8f8,Namespace:calico-system,Attempt:1,}" Sep 13 00:05:51.843451 systemd[1]: run-netns-cni\x2d7b34943e\x2db227\x2deded\x2d0417\x2ddac04359c784.mount: Deactivated successfully. 
Sep 13 00:05:51.857000 audit[4799]: AVC avc: denied { bpf } for pid=4799 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.857000 audit[4799]: AVC avc: denied { bpf } for pid=4799 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.857000 audit[4799]: AVC avc: denied { perfmon } for pid=4799 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.857000 audit[4799]: AVC avc: denied { perfmon } for pid=4799 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.857000 audit[4799]: AVC avc: denied { perfmon } for pid=4799 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.857000 audit[4799]: AVC avc: denied { perfmon } for pid=4799 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.857000 audit[4799]: AVC avc: denied { perfmon } for pid=4799 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.857000 audit[4799]: AVC avc: denied { bpf } for pid=4799 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.857000 audit[4799]: AVC avc: denied { bpf } for pid=4799 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.857000 audit: BPF prog-id=18 op=LOAD Sep 13 
00:05:51.857000 audit[4799]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffefc42438 a2=98 a3=ffffefc42428 items=0 ppid=4415 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:51.857000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 13 00:05:51.857000 audit: BPF prog-id=18 op=UNLOAD Sep 13 00:05:51.857000 audit[4799]: AVC avc: denied { bpf } for pid=4799 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.857000 audit[4799]: AVC avc: denied { bpf } for pid=4799 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.857000 audit[4799]: AVC avc: denied { perfmon } for pid=4799 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.857000 audit[4799]: AVC avc: denied { perfmon } for pid=4799 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.857000 audit[4799]: AVC avc: denied { perfmon } for pid=4799 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.857000 audit[4799]: AVC avc: denied { perfmon } for pid=4799 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 
00:05:51.857000 audit[4799]: AVC avc: denied { perfmon } for pid=4799 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.857000 audit[4799]: AVC avc: denied { bpf } for pid=4799 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.857000 audit[4799]: AVC avc: denied { bpf } for pid=4799 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.857000 audit: BPF prog-id=19 op=LOAD Sep 13 00:05:51.857000 audit[4799]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffefc422e8 a2=74 a3=95 items=0 ppid=4415 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:51.857000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 13 00:05:51.858000 audit: BPF prog-id=19 op=UNLOAD Sep 13 00:05:51.858000 audit[4799]: AVC avc: denied { bpf } for pid=4799 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.858000 audit[4799]: AVC avc: denied { bpf } for pid=4799 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.858000 audit[4799]: AVC avc: denied { perfmon } for pid=4799 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 13 00:05:51.858000 audit[4799]: AVC avc: denied { perfmon } for pid=4799 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.858000 audit[4799]: AVC avc: denied { perfmon } for pid=4799 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.858000 audit[4799]: AVC avc: denied { perfmon } for pid=4799 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.858000 audit[4799]: AVC avc: denied { perfmon } for pid=4799 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.858000 audit[4799]: AVC avc: denied { bpf } for pid=4799 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.858000 audit[4799]: AVC avc: denied { bpf } for pid=4799 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:51.858000 audit: BPF prog-id=20 op=LOAD Sep 13 00:05:51.858000 audit[4799]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffefc42318 a2=40 a3=ffffefc42348 items=0 ppid=4415 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:51.858000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 13 00:05:51.858000 audit: BPF prog-id=20 op=UNLOAD Sep 13 00:05:52.129713 systemd-networkd[1599]: cali6d9fff504e4: Link UP Sep 13 00:05:52.141532 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:05:52.141686 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6d9fff504e4: link becomes ready Sep 13 00:05:52.141159 systemd-networkd[1599]: cali6d9fff504e4: Gained carrier Sep 13 00:05:52.154747 systemd-networkd[1599]: vxlan.calico: Link UP Sep 13 00:05:52.154760 systemd-networkd[1599]: vxlan.calico: Gained carrier Sep 13 00:05:52.224642 env[1920]: 2025-09-13 00:05:51.411 [INFO][4675] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--tpkzf-eth0 calico-apiserver-8978b56b9- calico-apiserver d0a25691-16ce-4727-a17e-3adde345074c 969 0 2025-09-13 00:05:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8978b56b9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-29-1 calico-apiserver-8978b56b9-tpkzf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6d9fff504e4 [] [] }} ContainerID="894b43e3dc548a07ab30eeca8bab3d67cdf388423ef8ef0ea5c3b88f1acd13b6" Namespace="calico-apiserver" Pod="calico-apiserver-8978b56b9-tpkzf" WorkloadEndpoint="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--tpkzf-" Sep 13 00:05:52.224642 env[1920]: 2025-09-13 00:05:51.411 [INFO][4675] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="894b43e3dc548a07ab30eeca8bab3d67cdf388423ef8ef0ea5c3b88f1acd13b6" Namespace="calico-apiserver" Pod="calico-apiserver-8978b56b9-tpkzf" WorkloadEndpoint="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--tpkzf-eth0" Sep 13 00:05:52.224642 env[1920]: 2025-09-13 00:05:51.843 [INFO][4775] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="894b43e3dc548a07ab30eeca8bab3d67cdf388423ef8ef0ea5c3b88f1acd13b6" HandleID="k8s-pod-network.894b43e3dc548a07ab30eeca8bab3d67cdf388423ef8ef0ea5c3b88f1acd13b6" Workload="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--tpkzf-eth0" Sep 13 00:05:52.224642 env[1920]: 2025-09-13 00:05:51.845 [INFO][4775] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="894b43e3dc548a07ab30eeca8bab3d67cdf388423ef8ef0ea5c3b88f1acd13b6" HandleID="k8s-pod-network.894b43e3dc548a07ab30eeca8bab3d67cdf388423ef8ef0ea5c3b88f1acd13b6" Workload="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--tpkzf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000328760), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-29-1", "pod":"calico-apiserver-8978b56b9-tpkzf", "timestamp":"2025-09-13 00:05:51.843620666 +0000 UTC"}, Hostname:"ip-172-31-29-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:05:52.224642 env[1920]: 2025-09-13 00:05:51.845 [INFO][4775] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:52.224642 env[1920]: 2025-09-13 00:05:51.846 [INFO][4775] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:05:52.224642 env[1920]: 2025-09-13 00:05:51.846 [INFO][4775] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-1' Sep 13 00:05:52.224642 env[1920]: 2025-09-13 00:05:51.870 [INFO][4775] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.894b43e3dc548a07ab30eeca8bab3d67cdf388423ef8ef0ea5c3b88f1acd13b6" host="ip-172-31-29-1" Sep 13 00:05:52.224642 env[1920]: 2025-09-13 00:05:51.916 [INFO][4775] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-1" Sep 13 00:05:52.224642 env[1920]: 2025-09-13 00:05:51.969 [INFO][4775] ipam/ipam.go 511: Trying affinity for 192.168.50.64/26 host="ip-172-31-29-1" Sep 13 00:05:52.224642 env[1920]: 2025-09-13 00:05:51.986 [INFO][4775] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.64/26 host="ip-172-31-29-1" Sep 13 00:05:52.224642 env[1920]: 2025-09-13 00:05:51.998 [INFO][4775] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.64/26 host="ip-172-31-29-1" Sep 13 00:05:52.224642 env[1920]: 2025-09-13 00:05:51.998 [INFO][4775] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.50.64/26 handle="k8s-pod-network.894b43e3dc548a07ab30eeca8bab3d67cdf388423ef8ef0ea5c3b88f1acd13b6" host="ip-172-31-29-1" Sep 13 00:05:52.224642 env[1920]: 2025-09-13 00:05:52.009 [INFO][4775] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.894b43e3dc548a07ab30eeca8bab3d67cdf388423ef8ef0ea5c3b88f1acd13b6 Sep 13 00:05:52.224642 env[1920]: 2025-09-13 00:05:52.040 [INFO][4775] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.50.64/26 handle="k8s-pod-network.894b43e3dc548a07ab30eeca8bab3d67cdf388423ef8ef0ea5c3b88f1acd13b6" host="ip-172-31-29-1" Sep 13 00:05:52.224642 env[1920]: 2025-09-13 00:05:52.056 [INFO][4775] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.50.67/26] block=192.168.50.64/26 handle="k8s-pod-network.894b43e3dc548a07ab30eeca8bab3d67cdf388423ef8ef0ea5c3b88f1acd13b6" 
host="ip-172-31-29-1" Sep 13 00:05:52.224642 env[1920]: 2025-09-13 00:05:52.056 [INFO][4775] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.67/26] handle="k8s-pod-network.894b43e3dc548a07ab30eeca8bab3d67cdf388423ef8ef0ea5c3b88f1acd13b6" host="ip-172-31-29-1" Sep 13 00:05:52.224642 env[1920]: 2025-09-13 00:05:52.056 [INFO][4775] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:52.224642 env[1920]: 2025-09-13 00:05:52.056 [INFO][4775] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.67/26] IPv6=[] ContainerID="894b43e3dc548a07ab30eeca8bab3d67cdf388423ef8ef0ea5c3b88f1acd13b6" HandleID="k8s-pod-network.894b43e3dc548a07ab30eeca8bab3d67cdf388423ef8ef0ea5c3b88f1acd13b6" Workload="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--tpkzf-eth0" Sep 13 00:05:52.226158 env[1920]: 2025-09-13 00:05:52.099 [INFO][4675] cni-plugin/k8s.go 418: Populated endpoint ContainerID="894b43e3dc548a07ab30eeca8bab3d67cdf388423ef8ef0ea5c3b88f1acd13b6" Namespace="calico-apiserver" Pod="calico-apiserver-8978b56b9-tpkzf" WorkloadEndpoint="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--tpkzf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--tpkzf-eth0", GenerateName:"calico-apiserver-8978b56b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"d0a25691-16ce-4727-a17e-3adde345074c", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8978b56b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-1", ContainerID:"", Pod:"calico-apiserver-8978b56b9-tpkzf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6d9fff504e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:52.226158 env[1920]: 2025-09-13 00:05:52.099 [INFO][4675] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.67/32] ContainerID="894b43e3dc548a07ab30eeca8bab3d67cdf388423ef8ef0ea5c3b88f1acd13b6" Namespace="calico-apiserver" Pod="calico-apiserver-8978b56b9-tpkzf" WorkloadEndpoint="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--tpkzf-eth0" Sep 13 00:05:52.226158 env[1920]: 2025-09-13 00:05:52.099 [INFO][4675] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6d9fff504e4 ContainerID="894b43e3dc548a07ab30eeca8bab3d67cdf388423ef8ef0ea5c3b88f1acd13b6" Namespace="calico-apiserver" Pod="calico-apiserver-8978b56b9-tpkzf" WorkloadEndpoint="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--tpkzf-eth0" Sep 13 00:05:52.226158 env[1920]: 2025-09-13 00:05:52.151 [INFO][4675] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="894b43e3dc548a07ab30eeca8bab3d67cdf388423ef8ef0ea5c3b88f1acd13b6" Namespace="calico-apiserver" Pod="calico-apiserver-8978b56b9-tpkzf" WorkloadEndpoint="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--tpkzf-eth0" Sep 13 00:05:52.226158 env[1920]: 2025-09-13 00:05:52.152 [INFO][4675] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="894b43e3dc548a07ab30eeca8bab3d67cdf388423ef8ef0ea5c3b88f1acd13b6" Namespace="calico-apiserver" Pod="calico-apiserver-8978b56b9-tpkzf" WorkloadEndpoint="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--tpkzf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--tpkzf-eth0", GenerateName:"calico-apiserver-8978b56b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"d0a25691-16ce-4727-a17e-3adde345074c", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8978b56b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-1", ContainerID:"894b43e3dc548a07ab30eeca8bab3d67cdf388423ef8ef0ea5c3b88f1acd13b6", Pod:"calico-apiserver-8978b56b9-tpkzf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6d9fff504e4", MAC:"02:18:d5:e5:6d:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:52.226158 env[1920]: 2025-09-13 00:05:52.183 [INFO][4675] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="894b43e3dc548a07ab30eeca8bab3d67cdf388423ef8ef0ea5c3b88f1acd13b6" Namespace="calico-apiserver" Pod="calico-apiserver-8978b56b9-tpkzf" WorkloadEndpoint="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--tpkzf-eth0" Sep 13 00:05:52.226158 env[1920]: 2025-09-13 00:05:51.377 [INFO][4645] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" Sep 13 00:05:52.226158 env[1920]: 2025-09-13 00:05:51.378 [INFO][4645] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" iface="eth0" netns="/var/run/netns/cni-5dcbba7a-443c-d0cd-fbf9-f2598a1602b4" Sep 13 00:05:52.226916 env[1920]: 2025-09-13 00:05:51.378 [INFO][4645] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" iface="eth0" netns="/var/run/netns/cni-5dcbba7a-443c-d0cd-fbf9-f2598a1602b4" Sep 13 00:05:52.226916 env[1920]: 2025-09-13 00:05:51.378 [INFO][4645] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" iface="eth0" netns="/var/run/netns/cni-5dcbba7a-443c-d0cd-fbf9-f2598a1602b4" Sep 13 00:05:52.226916 env[1920]: 2025-09-13 00:05:51.378 [INFO][4645] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" Sep 13 00:05:52.226916 env[1920]: 2025-09-13 00:05:51.378 [INFO][4645] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" Sep 13 00:05:52.226916 env[1920]: 2025-09-13 00:05:52.034 [INFO][4761] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" HandleID="k8s-pod-network.dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" Workload="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--fkdwg-eth0" Sep 13 00:05:52.226916 env[1920]: 2025-09-13 00:05:52.036 [INFO][4761] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:52.226916 env[1920]: 2025-09-13 00:05:52.056 [INFO][4761] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:52.226916 env[1920]: 2025-09-13 00:05:52.129 [WARNING][4761] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" HandleID="k8s-pod-network.dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" Workload="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--fkdwg-eth0" Sep 13 00:05:52.226916 env[1920]: 2025-09-13 00:05:52.129 [INFO][4761] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" HandleID="k8s-pod-network.dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" Workload="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--fkdwg-eth0" Sep 13 00:05:52.226916 env[1920]: 2025-09-13 00:05:52.144 [INFO][4761] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:52.226916 env[1920]: 2025-09-13 00:05:52.192 [INFO][4645] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" Sep 13 00:05:52.229700 env[1920]: time="2025-09-13T00:05:52.227125726Z" level=info msg="TearDown network for sandbox \"dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec\" successfully" Sep 13 00:05:52.229700 env[1920]: time="2025-09-13T00:05:52.227241334Z" level=info msg="StopPodSandbox for \"dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec\" returns successfully" Sep 13 00:05:52.229898 env[1920]: time="2025-09-13T00:05:52.229725345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fkdwg,Uid:42009314-cb57-4945-ba05-c28efb80272b,Namespace:kube-system,Attempt:1,}" Sep 13 00:05:52.346000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.346000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Sep 13 00:05:52.346000 audit[4856]: AVC avc: denied { perfmon } for pid=4856 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.346000 audit[4856]: AVC avc: denied { perfmon } for pid=4856 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.346000 audit[4856]: AVC avc: denied { perfmon } for pid=4856 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.346000 audit[4856]: AVC avc: denied { perfmon } for pid=4856 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.346000 audit[4856]: AVC avc: denied { perfmon } for pid=4856 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.346000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.346000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.346000 audit: BPF prog-id=21 op=LOAD Sep 13 00:05:52.346000 audit[4856]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc7aaeec8 a2=98 a3=ffffc7aaeeb8 items=0 ppid=4415 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:52.346000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:05:52.350000 audit: BPF prog-id=21 op=UNLOAD Sep 13 00:05:52.351000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.351000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.351000 audit[4856]: AVC avc: denied { perfmon } for pid=4856 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.351000 audit[4856]: AVC avc: denied { perfmon } for pid=4856 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.351000 audit[4856]: AVC avc: denied { perfmon } for pid=4856 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.351000 audit[4856]: AVC avc: denied { perfmon } for pid=4856 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.351000 audit[4856]: AVC avc: denied { perfmon } for pid=4856 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.351000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 
00:05:52.351000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.351000 audit: BPF prog-id=22 op=LOAD Sep 13 00:05:52.351000 audit[4856]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc7aaeba8 a2=74 a3=95 items=0 ppid=4415 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:52.351000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:05:52.351000 audit: BPF prog-id=22 op=UNLOAD Sep 13 00:05:52.351000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.351000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.351000 audit[4856]: AVC avc: denied { perfmon } for pid=4856 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.351000 audit[4856]: AVC avc: denied { perfmon } for pid=4856 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.351000 audit[4856]: AVC avc: denied { perfmon } for pid=4856 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.351000 
audit[4856]: AVC avc: denied { perfmon } for pid=4856 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.351000 audit[4856]: AVC avc: denied { perfmon } for pid=4856 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.351000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.351000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.351000 audit: BPF prog-id=23 op=LOAD Sep 13 00:05:52.351000 audit[4856]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc7aaec08 a2=94 a3=2 items=0 ppid=4415 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:52.351000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:05:52.352000 audit: BPF prog-id=23 op=UNLOAD Sep 13 00:05:52.352000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.352000 audit[4856]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc7aaec38 a2=28 a3=ffffc7aaed68 items=0 ppid=4415 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:52.352000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:05:52.352000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.352000 audit[4856]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc7aaec68 a2=28 a3=ffffc7aaed98 items=0 ppid=4415 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:52.352000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:05:52.352000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.352000 audit[4856]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc7aaeb18 a2=28 a3=ffffc7aaec48 items=0 ppid=4415 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:52.352000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:05:52.352000 
audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.352000 audit[4856]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc7aaec88 a2=28 a3=ffffc7aaedb8 items=0 ppid=4415 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:52.352000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:05:52.352000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.352000 audit[4856]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc7aaec68 a2=28 a3=ffffc7aaed98 items=0 ppid=4415 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:52.352000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:05:52.352000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.352000 audit[4856]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc7aaec58 a2=28 a3=ffffc7aaed88 items=0 ppid=4415 pid=4856 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:52.352000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:05:52.352000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.352000 audit[4856]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc7aaec88 a2=28 a3=ffffc7aaedb8 items=0 ppid=4415 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:52.352000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:05:52.352000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.352000 audit[4856]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc7aaec68 a2=28 a3=ffffc7aaed98 items=0 ppid=4415 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:52.352000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:05:52.352000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.352000 audit[4856]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc7aaec88 a2=28 a3=ffffc7aaedb8 items=0 ppid=4415 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:52.352000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:05:52.352000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.352000 audit[4856]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc7aaec58 a2=28 a3=ffffc7aaed88 items=0 ppid=4415 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:52.352000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:05:52.352000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.352000 audit[4856]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc7aaecd8 a2=28 a3=ffffc7aaee18 items=0 ppid=4415 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:52.352000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:05:52.355000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.355000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.355000 audit[4856]: AVC avc: denied { perfmon } for pid=4856 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.355000 audit[4856]: AVC avc: denied { perfmon } for pid=4856 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.355000 audit[4856]: AVC avc: denied { perfmon } for pid=4856 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.355000 audit[4856]: AVC avc: denied { perfmon } for pid=4856 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.355000 audit[4856]: AVC 
avc: denied { perfmon } for pid=4856 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.355000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.355000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.355000 audit: BPF prog-id=24 op=LOAD Sep 13 00:05:52.355000 audit[4856]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc7aaeaf8 a2=40 a3=ffffc7aaeb28 items=0 ppid=4415 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:52.355000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:05:52.356000 audit: BPF prog-id=24 op=UNLOAD Sep 13 00:05:52.356000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.356000 audit[4856]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=0 a1=ffffc7aaeb20 a2=50 a3=0 items=0 ppid=4415 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:52.356000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:05:52.357000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.357000 audit[4856]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=0 a1=ffffc7aaeb20 a2=50 a3=0 items=0 ppid=4415 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:52.357000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:05:52.357000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.357000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.357000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.357000 audit[4856]: AVC avc: denied { perfmon } for pid=4856 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.357000 audit[4856]: AVC avc: denied { perfmon } for pid=4856 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.357000 audit[4856]: AVC avc: denied { perfmon } for pid=4856 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.357000 audit[4856]: AVC avc: denied { perfmon } for pid=4856 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.357000 audit[4856]: AVC avc: denied { perfmon } for pid=4856 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.357000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.357000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.357000 audit: BPF prog-id=25 op=LOAD Sep 13 00:05:52.357000 audit[4856]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc7aae288 a2=94 a3=2 items=0 ppid=4415 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:52.357000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:05:52.357000 audit: BPF prog-id=25 op=UNLOAD Sep 13 00:05:52.357000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.357000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.357000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.357000 audit[4856]: AVC avc: denied { perfmon } for pid=4856 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.357000 audit[4856]: AVC avc: denied { perfmon } for pid=4856 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.357000 audit[4856]: AVC avc: denied { perfmon } for pid=4856 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.357000 audit[4856]: AVC avc: denied { perfmon } for pid=4856 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.357000 audit[4856]: AVC avc: denied { perfmon } for pid=4856 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.357000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.357000 audit[4856]: AVC avc: denied { bpf } for pid=4856 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.357000 audit: BPF prog-id=26 op=LOAD Sep 13 00:05:52.357000 audit[4856]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc7aae418 a2=94 a3=30 items=0 ppid=4415 pid=4856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:52.357000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:05:52.385000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.385000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.385000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.385000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.385000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.385000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Sep 13 00:05:52.385000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.385000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.385000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.385000 audit: BPF prog-id=27 op=LOAD Sep 13 00:05:52.385000 audit[4859]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd71ccb98 a2=98 a3=ffffd71ccb88 items=0 ppid=4415 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:52.385000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:05:52.385000 audit: BPF prog-id=27 op=UNLOAD Sep 13 00:05:52.386000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.386000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.386000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.386000 
audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.386000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.386000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.386000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.386000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.386000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.386000 audit: BPF prog-id=28 op=LOAD Sep 13 00:05:52.386000 audit[4859]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd71cc828 a2=74 a3=95 items=0 ppid=4415 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:52.386000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:05:52.386000 audit: BPF prog-id=28 op=UNLOAD Sep 13 00:05:52.386000 audit[4859]: AVC avc: denied { bpf } for 
pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.386000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.386000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.386000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.386000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.386000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.386000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.386000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.386000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:52.386000 audit: BPF prog-id=29 op=LOAD Sep 13 00:05:52.386000 audit[4859]: SYSCALL arch=c00000b7 syscall=280 success=yes 
exit=4 a0=5 a1=ffffd71cc888 a2=94 a3=2 items=0 ppid=4415 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:52.386000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:05:52.386000 audit: BPF prog-id=29 op=UNLOAD Sep 13 00:05:52.473117 systemd[1]: run-netns-cni\x2d5dcbba7a\x2d443c\x2dd0cd\x2dfbf9\x2df2598a1602b4.mount: Deactivated successfully. Sep 13 00:05:52.577666 env[1920]: time="2025-09-13T00:05:52.577388928Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:52.579518 env[1920]: time="2025-09-13T00:05:52.579423852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:52.580368 env[1920]: time="2025-09-13T00:05:52.580161198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:52.588002 env[1920]: time="2025-09-13T00:05:52.587675889Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/894b43e3dc548a07ab30eeca8bab3d67cdf388423ef8ef0ea5c3b88f1acd13b6 pid=4862 runtime=io.containerd.runc.v2 Sep 13 00:05:52.618976 env[1920]: time="2025-09-13T00:05:52.618903759Z" level=info msg="StopPodSandbox for \"dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b\"" Sep 13 00:05:52.763067 systemd-networkd[1599]: cali39934a37d19: Link UP Sep 13 00:05:52.772799 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali39934a37d19: link becomes ready Sep 13 00:05:52.772023 systemd-networkd[1599]: cali39934a37d19: Gained carrier Sep 13 00:05:52.809199 systemd-networkd[1599]: cali5bf21a77f93: Gained IPv6LL Sep 13 00:05:52.855720 env[1920]: 2025-09-13 00:05:51.969 [INFO][4733] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--mstsf-eth0 calico-apiserver-8978b56b9- calico-apiserver 527909b5-007b-4b06-af5d-92e50eb61126 973 0 2025-09-13 00:05:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8978b56b9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-29-1 calico-apiserver-8978b56b9-mstsf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali39934a37d19 [] [] }} ContainerID="c6597bd5b30b19a93626ba95673fd81ae6461643bedc3c601682b99b37ebff2a" Namespace="calico-apiserver" Pod="calico-apiserver-8978b56b9-mstsf" WorkloadEndpoint="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--mstsf-" Sep 13 00:05:52.855720 env[1920]: 2025-09-13 00:05:51.969 [INFO][4733] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="c6597bd5b30b19a93626ba95673fd81ae6461643bedc3c601682b99b37ebff2a" Namespace="calico-apiserver" Pod="calico-apiserver-8978b56b9-mstsf" WorkloadEndpoint="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--mstsf-eth0" Sep 13 00:05:52.855720 env[1920]: 2025-09-13 00:05:52.518 [INFO][4822] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c6597bd5b30b19a93626ba95673fd81ae6461643bedc3c601682b99b37ebff2a" HandleID="k8s-pod-network.c6597bd5b30b19a93626ba95673fd81ae6461643bedc3c601682b99b37ebff2a" Workload="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--mstsf-eth0" Sep 13 00:05:52.855720 env[1920]: 2025-09-13 00:05:52.518 [INFO][4822] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c6597bd5b30b19a93626ba95673fd81ae6461643bedc3c601682b99b37ebff2a" HandleID="k8s-pod-network.c6597bd5b30b19a93626ba95673fd81ae6461643bedc3c601682b99b37ebff2a" Workload="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--mstsf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000328390), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-29-1", "pod":"calico-apiserver-8978b56b9-mstsf", "timestamp":"2025-09-13 00:05:52.517979831 +0000 UTC"}, Hostname:"ip-172-31-29-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:05:52.855720 env[1920]: 2025-09-13 00:05:52.518 [INFO][4822] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:52.855720 env[1920]: 2025-09-13 00:05:52.518 [INFO][4822] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:05:52.855720 env[1920]: 2025-09-13 00:05:52.518 [INFO][4822] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-1' Sep 13 00:05:52.855720 env[1920]: 2025-09-13 00:05:52.553 [INFO][4822] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c6597bd5b30b19a93626ba95673fd81ae6461643bedc3c601682b99b37ebff2a" host="ip-172-31-29-1" Sep 13 00:05:52.855720 env[1920]: 2025-09-13 00:05:52.595 [INFO][4822] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-1" Sep 13 00:05:52.855720 env[1920]: 2025-09-13 00:05:52.654 [INFO][4822] ipam/ipam.go 511: Trying affinity for 192.168.50.64/26 host="ip-172-31-29-1" Sep 13 00:05:52.855720 env[1920]: 2025-09-13 00:05:52.660 [INFO][4822] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.64/26 host="ip-172-31-29-1" Sep 13 00:05:52.855720 env[1920]: 2025-09-13 00:05:52.665 [INFO][4822] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.64/26 host="ip-172-31-29-1" Sep 13 00:05:52.855720 env[1920]: 2025-09-13 00:05:52.665 [INFO][4822] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.50.64/26 handle="k8s-pod-network.c6597bd5b30b19a93626ba95673fd81ae6461643bedc3c601682b99b37ebff2a" host="ip-172-31-29-1" Sep 13 00:05:52.855720 env[1920]: 2025-09-13 00:05:52.673 [INFO][4822] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c6597bd5b30b19a93626ba95673fd81ae6461643bedc3c601682b99b37ebff2a Sep 13 00:05:52.855720 env[1920]: 2025-09-13 00:05:52.696 [INFO][4822] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.50.64/26 handle="k8s-pod-network.c6597bd5b30b19a93626ba95673fd81ae6461643bedc3c601682b99b37ebff2a" host="ip-172-31-29-1" Sep 13 00:05:52.855720 env[1920]: 2025-09-13 00:05:52.730 [INFO][4822] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.50.68/26] block=192.168.50.64/26 handle="k8s-pod-network.c6597bd5b30b19a93626ba95673fd81ae6461643bedc3c601682b99b37ebff2a" 
host="ip-172-31-29-1" Sep 13 00:05:52.855720 env[1920]: 2025-09-13 00:05:52.730 [INFO][4822] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.68/26] handle="k8s-pod-network.c6597bd5b30b19a93626ba95673fd81ae6461643bedc3c601682b99b37ebff2a" host="ip-172-31-29-1" Sep 13 00:05:52.855720 env[1920]: 2025-09-13 00:05:52.730 [INFO][4822] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:52.855720 env[1920]: 2025-09-13 00:05:52.730 [INFO][4822] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.68/26] IPv6=[] ContainerID="c6597bd5b30b19a93626ba95673fd81ae6461643bedc3c601682b99b37ebff2a" HandleID="k8s-pod-network.c6597bd5b30b19a93626ba95673fd81ae6461643bedc3c601682b99b37ebff2a" Workload="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--mstsf-eth0" Sep 13 00:05:52.857414 env[1920]: 2025-09-13 00:05:52.734 [INFO][4733] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c6597bd5b30b19a93626ba95673fd81ae6461643bedc3c601682b99b37ebff2a" Namespace="calico-apiserver" Pod="calico-apiserver-8978b56b9-mstsf" WorkloadEndpoint="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--mstsf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--mstsf-eth0", GenerateName:"calico-apiserver-8978b56b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"527909b5-007b-4b06-af5d-92e50eb61126", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8978b56b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-1", ContainerID:"", Pod:"calico-apiserver-8978b56b9-mstsf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali39934a37d19", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:52.857414 env[1920]: 2025-09-13 00:05:52.734 [INFO][4733] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.68/32] ContainerID="c6597bd5b30b19a93626ba95673fd81ae6461643bedc3c601682b99b37ebff2a" Namespace="calico-apiserver" Pod="calico-apiserver-8978b56b9-mstsf" WorkloadEndpoint="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--mstsf-eth0" Sep 13 00:05:52.857414 env[1920]: 2025-09-13 00:05:52.734 [INFO][4733] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali39934a37d19 ContainerID="c6597bd5b30b19a93626ba95673fd81ae6461643bedc3c601682b99b37ebff2a" Namespace="calico-apiserver" Pod="calico-apiserver-8978b56b9-mstsf" WorkloadEndpoint="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--mstsf-eth0" Sep 13 00:05:52.857414 env[1920]: 2025-09-13 00:05:52.817 [INFO][4733] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c6597bd5b30b19a93626ba95673fd81ae6461643bedc3c601682b99b37ebff2a" Namespace="calico-apiserver" Pod="calico-apiserver-8978b56b9-mstsf" WorkloadEndpoint="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--mstsf-eth0" Sep 13 00:05:52.857414 env[1920]: 2025-09-13 00:05:52.818 [INFO][4733] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c6597bd5b30b19a93626ba95673fd81ae6461643bedc3c601682b99b37ebff2a" Namespace="calico-apiserver" Pod="calico-apiserver-8978b56b9-mstsf" WorkloadEndpoint="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--mstsf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--mstsf-eth0", GenerateName:"calico-apiserver-8978b56b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"527909b5-007b-4b06-af5d-92e50eb61126", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8978b56b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-1", ContainerID:"c6597bd5b30b19a93626ba95673fd81ae6461643bedc3c601682b99b37ebff2a", Pod:"calico-apiserver-8978b56b9-mstsf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali39934a37d19", MAC:"fe:26:98:1d:e1:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:52.857414 env[1920]: 2025-09-13 00:05:52.845 [INFO][4733] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="c6597bd5b30b19a93626ba95673fd81ae6461643bedc3c601682b99b37ebff2a" Namespace="calico-apiserver" Pod="calico-apiserver-8978b56b9-mstsf" WorkloadEndpoint="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--mstsf-eth0" Sep 13 00:05:53.076000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.076000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.076000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.076000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.076000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.076000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.076000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.076000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.076000 audit[4859]: AVC avc: denied { 
bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.076000 audit: BPF prog-id=30 op=LOAD Sep 13 00:05:53.076000 audit[4859]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd71cc848 a2=40 a3=ffffd71cc878 items=0 ppid=4415 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:53.076000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:05:53.077000 audit: BPF prog-id=30 op=UNLOAD Sep 13 00:05:53.077000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.077000 audit[4859]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffd71cc960 a2=50 a3=0 items=0 ppid=4415 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:53.077000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:05:53.122000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.122000 audit[4859]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd71cc8b8 a2=28 a3=ffffd71cc9e8 items=0 ppid=4415 pid=4859 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:53.122000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:05:53.122000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.122000 audit[4859]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd71cc8e8 a2=28 a3=ffffd71cca18 items=0 ppid=4415 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:53.122000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:05:53.122000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.122000 audit[4859]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd71cc798 a2=28 a3=ffffd71cc8c8 items=0 ppid=4415 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:53.122000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:05:53.122000 audit[4859]: 
AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.122000 audit[4859]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd71cc908 a2=28 a3=ffffd71cca38 items=0 ppid=4415 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:53.122000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:05:53.122000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.122000 audit[4859]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd71cc8e8 a2=28 a3=ffffd71cca18 items=0 ppid=4415 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:53.122000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:05:53.122000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.122000 audit[4859]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd71cc8d8 a2=28 a3=ffffd71cca08 items=0 ppid=4415 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:53.122000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:05:53.122000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.122000 audit[4859]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd71cc908 a2=28 a3=ffffd71cca38 items=0 ppid=4415 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:53.122000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:05:53.122000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.122000 audit[4859]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd71cc8e8 a2=28 a3=ffffd71cca18 items=0 ppid=4415 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:53.122000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:05:53.122000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.122000 audit[4859]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd71cc908 a2=28 a3=ffffd71cca38 items=0 ppid=4415 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:53.122000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:05:53.122000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.122000 audit[4859]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd71cc8d8 a2=28 a3=ffffd71cca08 items=0 ppid=4415 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:53.122000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:05:53.122000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.122000 audit[4859]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd71cc958 a2=28 a3=ffffd71cca98 items=0 ppid=4415 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:53.122000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:05:53.122000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.122000 audit[4859]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffd71cc690 a2=50 a3=0 items=0 ppid=4415 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:53.122000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:05:53.122000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.122000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.122000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.122000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.122000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.122000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.122000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.122000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.122000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.122000 audit: BPF prog-id=31 op=LOAD Sep 13 00:05:53.122000 audit[4859]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd71cc698 a2=94 a3=5 items=0 ppid=4415 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:53.122000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:05:53.123000 audit: BPF prog-id=31 op=UNLOAD Sep 13 00:05:53.123000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.123000 audit[4859]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffd71cc7a0 a2=50 a3=0 items=0 
ppid=4415 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:53.123000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:05:53.123000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.123000 audit[4859]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffd71cc8e8 a2=4 a3=3 items=0 ppid=4415 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:53.123000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:05:53.123000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.123000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.123000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.123000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.123000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.123000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.123000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.123000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.123000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.123000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.123000 audit[4859]: AVC avc: denied { confidentiality } for pid=4859 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:05:53.123000 audit[4859]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffd71cc8c8 a2=94 a3=6 items=0 ppid=4415 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:53.123000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:05:53.127000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.127000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.127000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.127000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.127000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.127000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.127000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.127000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.127000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.127000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.127000 audit[4859]: AVC avc: denied { confidentiality } for pid=4859 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:05:53.127000 audit[4859]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffd71cc098 a2=94 a3=83 items=0 ppid=4415 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:53.127000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:05:53.127000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.127000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.127000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.127000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Sep 13 00:05:53.127000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.127000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.127000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.127000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.127000 audit[4859]: AVC avc: denied { perfmon } for pid=4859 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.127000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.127000 audit[4859]: AVC avc: denied { confidentiality } for pid=4859 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:05:53.127000 audit[4859]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffd71cc098 a2=94 a3=83 items=0 ppid=4415 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:53.127000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:05:53.128000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.128000 audit[4859]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffd71cdad8 a2=10 a3=ffffd71cdbc8 items=0 ppid=4415 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:53.128000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:05:53.128000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.128000 audit[4859]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffd71cd998 a2=10 a3=ffffd71cda88 items=0 ppid=4415 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:53.128000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:05:53.128000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.128000 
audit[4859]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffd71cd908 a2=10 a3=ffffd71cda88 items=0 ppid=4415 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:53.128000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:05:53.128000 audit[4859]: AVC avc: denied { bpf } for pid=4859 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:05:53.128000 audit[4859]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffd71cd908 a2=10 a3=ffffd71cda88 items=0 ppid=4415 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:53.128000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:05:53.136000 audit: BPF prog-id=26 op=UNLOAD Sep 13 00:05:53.136000 audit[1599]: SYSCALL arch=c00000b7 syscall=35 success=no exit=-2 a0=ffffffffffffff9c a1=aaaae8bb2fd0 a2=0 a3=aaaae8b77010 items=1 ppid=1 pid=1599 auid=4294967295 uid=244 gid=244 euid=244 suid=244 fsuid=244 egid=244 sgid=244 fsgid=244 tty=(none) ses=4294967295 comm="systemd-network" exe="/usr/lib/systemd/systemd-networkd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:53.136000 audit: CWD cwd="/" Sep 13 00:05:53.136000 audit: PATH item=0 name="/run/systemd/netif/lldp/" inode=784 dev=00:18 mode=040755 ouid=244 ogid=244 rdev=00:00 
obj=system_u:object_r:systemd_networkd_runtime_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:05:53.136000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-networkd" Sep 13 00:05:53.174252 systemd-networkd[1599]: cali46578ccb397: Link UP Sep 13 00:05:53.187525 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali46578ccb397: link becomes ready Sep 13 00:05:53.186136 systemd-networkd[1599]: cali46578ccb397: Gained carrier Sep 13 00:05:53.187820 env[1920]: time="2025-09-13T00:05:53.123473512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:53.187820 env[1920]: time="2025-09-13T00:05:53.123554533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:53.187820 env[1920]: time="2025-09-13T00:05:53.123594005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:53.187820 env[1920]: time="2025-09-13T00:05:53.133510782Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c6597bd5b30b19a93626ba95673fd81ae6461643bedc3c601682b99b37ebff2a pid=4939 runtime=io.containerd.runc.v2 Sep 13 00:05:53.247346 env[1920]: 2025-09-13 00:05:52.511 [INFO][4811] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--1-k8s-csi--node--driver--cdkj7-eth0 csi-node-driver- calico-system b0810e85-a772-4d04-a431-1cf9fa4bc8f8 988 0 2025-09-13 00:05:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-29-1 csi-node-driver-cdkj7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali46578ccb397 [] [] }} ContainerID="20f725260c00a448e75c55331e52e41c73477b7f310b154ae308b79c186f2e4a" Namespace="calico-system" Pod="csi-node-driver-cdkj7" WorkloadEndpoint="ip--172--31--29--1-k8s-csi--node--driver--cdkj7-" Sep 13 00:05:53.247346 env[1920]: 2025-09-13 00:05:52.511 [INFO][4811] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="20f725260c00a448e75c55331e52e41c73477b7f310b154ae308b79c186f2e4a" Namespace="calico-system" Pod="csi-node-driver-cdkj7" WorkloadEndpoint="ip--172--31--29--1-k8s-csi--node--driver--cdkj7-eth0" Sep 13 00:05:53.247346 env[1920]: 2025-09-13 00:05:52.929 [INFO][4883] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="20f725260c00a448e75c55331e52e41c73477b7f310b154ae308b79c186f2e4a" HandleID="k8s-pod-network.20f725260c00a448e75c55331e52e41c73477b7f310b154ae308b79c186f2e4a" 
Workload="ip--172--31--29--1-k8s-csi--node--driver--cdkj7-eth0" Sep 13 00:05:53.247346 env[1920]: 2025-09-13 00:05:52.930 [INFO][4883] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="20f725260c00a448e75c55331e52e41c73477b7f310b154ae308b79c186f2e4a" HandleID="k8s-pod-network.20f725260c00a448e75c55331e52e41c73477b7f310b154ae308b79c186f2e4a" Workload="ip--172--31--29--1-k8s-csi--node--driver--cdkj7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000602490), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-1", "pod":"csi-node-driver-cdkj7", "timestamp":"2025-09-13 00:05:52.92963702 +0000 UTC"}, Hostname:"ip-172-31-29-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:05:53.247346 env[1920]: 2025-09-13 00:05:52.930 [INFO][4883] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:53.247346 env[1920]: 2025-09-13 00:05:52.930 [INFO][4883] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:05:53.247346 env[1920]: 2025-09-13 00:05:52.930 [INFO][4883] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-1' Sep 13 00:05:53.247346 env[1920]: 2025-09-13 00:05:52.955 [INFO][4883] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.20f725260c00a448e75c55331e52e41c73477b7f310b154ae308b79c186f2e4a" host="ip-172-31-29-1" Sep 13 00:05:53.247346 env[1920]: 2025-09-13 00:05:52.975 [INFO][4883] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-1" Sep 13 00:05:53.247346 env[1920]: 2025-09-13 00:05:53.016 [INFO][4883] ipam/ipam.go 511: Trying affinity for 192.168.50.64/26 host="ip-172-31-29-1" Sep 13 00:05:53.247346 env[1920]: 2025-09-13 00:05:53.030 [INFO][4883] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.64/26 host="ip-172-31-29-1" Sep 13 00:05:53.247346 env[1920]: 2025-09-13 00:05:53.037 [INFO][4883] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.64/26 host="ip-172-31-29-1" Sep 13 00:05:53.247346 env[1920]: 2025-09-13 00:05:53.041 [INFO][4883] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.50.64/26 handle="k8s-pod-network.20f725260c00a448e75c55331e52e41c73477b7f310b154ae308b79c186f2e4a" host="ip-172-31-29-1" Sep 13 00:05:53.247346 env[1920]: 2025-09-13 00:05:53.052 [INFO][4883] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.20f725260c00a448e75c55331e52e41c73477b7f310b154ae308b79c186f2e4a Sep 13 00:05:53.247346 env[1920]: 2025-09-13 00:05:53.068 [INFO][4883] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.50.64/26 handle="k8s-pod-network.20f725260c00a448e75c55331e52e41c73477b7f310b154ae308b79c186f2e4a" host="ip-172-31-29-1" Sep 13 00:05:53.247346 env[1920]: 2025-09-13 00:05:53.093 [INFO][4883] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.50.69/26] block=192.168.50.64/26 handle="k8s-pod-network.20f725260c00a448e75c55331e52e41c73477b7f310b154ae308b79c186f2e4a" 
host="ip-172-31-29-1" Sep 13 00:05:53.247346 env[1920]: 2025-09-13 00:05:53.093 [INFO][4883] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.69/26] handle="k8s-pod-network.20f725260c00a448e75c55331e52e41c73477b7f310b154ae308b79c186f2e4a" host="ip-172-31-29-1" Sep 13 00:05:53.247346 env[1920]: 2025-09-13 00:05:53.094 [INFO][4883] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:53.247346 env[1920]: 2025-09-13 00:05:53.094 [INFO][4883] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.69/26] IPv6=[] ContainerID="20f725260c00a448e75c55331e52e41c73477b7f310b154ae308b79c186f2e4a" HandleID="k8s-pod-network.20f725260c00a448e75c55331e52e41c73477b7f310b154ae308b79c186f2e4a" Workload="ip--172--31--29--1-k8s-csi--node--driver--cdkj7-eth0" Sep 13 00:05:53.248627 env[1920]: 2025-09-13 00:05:53.120 [INFO][4811] cni-plugin/k8s.go 418: Populated endpoint ContainerID="20f725260c00a448e75c55331e52e41c73477b7f310b154ae308b79c186f2e4a" Namespace="calico-system" Pod="csi-node-driver-cdkj7" WorkloadEndpoint="ip--172--31--29--1-k8s-csi--node--driver--cdkj7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--1-k8s-csi--node--driver--cdkj7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b0810e85-a772-4d04-a431-1cf9fa4bc8f8", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-1", ContainerID:"", Pod:"csi-node-driver-cdkj7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali46578ccb397", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:53.248627 env[1920]: 2025-09-13 00:05:53.148 [INFO][4811] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.69/32] ContainerID="20f725260c00a448e75c55331e52e41c73477b7f310b154ae308b79c186f2e4a" Namespace="calico-system" Pod="csi-node-driver-cdkj7" WorkloadEndpoint="ip--172--31--29--1-k8s-csi--node--driver--cdkj7-eth0" Sep 13 00:05:53.248627 env[1920]: 2025-09-13 00:05:53.148 [INFO][4811] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali46578ccb397 ContainerID="20f725260c00a448e75c55331e52e41c73477b7f310b154ae308b79c186f2e4a" Namespace="calico-system" Pod="csi-node-driver-cdkj7" WorkloadEndpoint="ip--172--31--29--1-k8s-csi--node--driver--cdkj7-eth0" Sep 13 00:05:53.248627 env[1920]: 2025-09-13 00:05:53.205 [INFO][4811] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="20f725260c00a448e75c55331e52e41c73477b7f310b154ae308b79c186f2e4a" Namespace="calico-system" Pod="csi-node-driver-cdkj7" WorkloadEndpoint="ip--172--31--29--1-k8s-csi--node--driver--cdkj7-eth0" Sep 13 00:05:53.248627 env[1920]: 2025-09-13 00:05:53.213 [INFO][4811] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="20f725260c00a448e75c55331e52e41c73477b7f310b154ae308b79c186f2e4a" Namespace="calico-system" Pod="csi-node-driver-cdkj7" 
WorkloadEndpoint="ip--172--31--29--1-k8s-csi--node--driver--cdkj7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--1-k8s-csi--node--driver--cdkj7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b0810e85-a772-4d04-a431-1cf9fa4bc8f8", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-1", ContainerID:"20f725260c00a448e75c55331e52e41c73477b7f310b154ae308b79c186f2e4a", Pod:"csi-node-driver-cdkj7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali46578ccb397", MAC:"2a:79:9f:f3:af:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:53.248627 env[1920]: 2025-09-13 00:05:53.241 [INFO][4811] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="20f725260c00a448e75c55331e52e41c73477b7f310b154ae308b79c186f2e4a" Namespace="calico-system" Pod="csi-node-driver-cdkj7" WorkloadEndpoint="ip--172--31--29--1-k8s-csi--node--driver--cdkj7-eth0" Sep 13 
00:05:53.351306 env[1920]: time="2025-09-13T00:05:53.351159277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8978b56b9-tpkzf,Uid:d0a25691-16ce-4727-a17e-3adde345074c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"894b43e3dc548a07ab30eeca8bab3d67cdf388423ef8ef0ea5c3b88f1acd13b6\"" Sep 13 00:05:53.503197 env[1920]: time="2025-09-13T00:05:53.502980749Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:53.503197 env[1920]: time="2025-09-13T00:05:53.503072931Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:53.503197 env[1920]: time="2025-09-13T00:05:53.503132601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:53.504528 env[1920]: time="2025-09-13T00:05:53.504373877Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/20f725260c00a448e75c55331e52e41c73477b7f310b154ae308b79c186f2e4a pid=5011 runtime=io.containerd.runc.v2 Sep 13 00:05:53.541000 audit[5028]: NETFILTER_CFG table=mangle:105 family=2 entries=16 op=nft_register_chain pid=5028 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:05:53.541000 audit[5028]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffe2506ec0 a2=0 a3=ffff8389cfa8 items=0 ppid=4415 pid=5028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:53.541000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:05:53.563000 audit[5035]: 
NETFILTER_CFG table=nat:106 family=2 entries=15 op=nft_register_chain pid=5035 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:05:53.563000 audit[5035]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=fffffbdf5a00 a2=0 a3=ffffbc34dfa8 items=0 ppid=4415 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:53.563000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:05:53.611000 audit[5032]: NETFILTER_CFG table=raw:107 family=2 entries=21 op=nft_register_chain pid=5032 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:05:53.611000 audit[5032]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffcd1d4050 a2=0 a3=ffffb511efa8 items=0 ppid=4415 pid=5032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:53.616136 env[1920]: time="2025-09-13T00:05:53.616035096Z" level=info msg="StopPodSandbox for \"b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00\"" Sep 13 00:05:53.611000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:05:53.773249 systemd-networkd[1599]: cali857449cc2f9: Link UP Sep 13 00:05:53.781102 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali857449cc2f9: link becomes ready Sep 13 00:05:53.781438 systemd-networkd[1599]: cali6d9fff504e4: Gained IPv6LL Sep 13 00:05:53.782041 systemd-networkd[1599]: cali857449cc2f9: Gained carrier Sep 13 00:05:53.904437 env[1920]: 2025-09-13 00:05:52.921 [INFO][4849] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--1-k8s-coredns--7c65d6cfc9--fkdwg-eth0 coredns-7c65d6cfc9- kube-system 42009314-cb57-4945-ba05-c28efb80272b 989 0 2025-09-13 00:04:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-1 coredns-7c65d6cfc9-fkdwg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali857449cc2f9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7ed572b47446112f184bc5bc46147fe7e895e7e7aa9c784208191688f9526442" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fkdwg" WorkloadEndpoint="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--fkdwg-" Sep 13 00:05:53.904437 env[1920]: 2025-09-13 00:05:52.921 [INFO][4849] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7ed572b47446112f184bc5bc46147fe7e895e7e7aa9c784208191688f9526442" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fkdwg" WorkloadEndpoint="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--fkdwg-eth0" Sep 13 00:05:53.904437 env[1920]: 2025-09-13 00:05:53.564 [INFO][4929] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ed572b47446112f184bc5bc46147fe7e895e7e7aa9c784208191688f9526442" HandleID="k8s-pod-network.7ed572b47446112f184bc5bc46147fe7e895e7e7aa9c784208191688f9526442" Workload="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--fkdwg-eth0" Sep 13 00:05:53.904437 env[1920]: 2025-09-13 00:05:53.564 [INFO][4929] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7ed572b47446112f184bc5bc46147fe7e895e7e7aa9c784208191688f9526442" HandleID="k8s-pod-network.7ed572b47446112f184bc5bc46147fe7e895e7e7aa9c784208191688f9526442" Workload="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--fkdwg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003227e0), 
Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-1", "pod":"coredns-7c65d6cfc9-fkdwg", "timestamp":"2025-09-13 00:05:53.56417941 +0000 UTC"}, Hostname:"ip-172-31-29-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:05:53.904437 env[1920]: 2025-09-13 00:05:53.564 [INFO][4929] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:53.904437 env[1920]: 2025-09-13 00:05:53.564 [INFO][4929] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:53.904437 env[1920]: 2025-09-13 00:05:53.564 [INFO][4929] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-1' Sep 13 00:05:53.904437 env[1920]: 2025-09-13 00:05:53.587 [INFO][4929] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7ed572b47446112f184bc5bc46147fe7e895e7e7aa9c784208191688f9526442" host="ip-172-31-29-1" Sep 13 00:05:53.904437 env[1920]: 2025-09-13 00:05:53.610 [INFO][4929] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-1" Sep 13 00:05:53.904437 env[1920]: 2025-09-13 00:05:53.648 [INFO][4929] ipam/ipam.go 511: Trying affinity for 192.168.50.64/26 host="ip-172-31-29-1" Sep 13 00:05:53.904437 env[1920]: 2025-09-13 00:05:53.666 [INFO][4929] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.64/26 host="ip-172-31-29-1" Sep 13 00:05:53.904437 env[1920]: 2025-09-13 00:05:53.672 [INFO][4929] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.64/26 host="ip-172-31-29-1" Sep 13 00:05:53.904437 env[1920]: 2025-09-13 00:05:53.672 [INFO][4929] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.50.64/26 handle="k8s-pod-network.7ed572b47446112f184bc5bc46147fe7e895e7e7aa9c784208191688f9526442" host="ip-172-31-29-1" Sep 13 00:05:53.904437 env[1920]: 2025-09-13 
00:05:53.682 [INFO][4929] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7ed572b47446112f184bc5bc46147fe7e895e7e7aa9c784208191688f9526442 Sep 13 00:05:53.904437 env[1920]: 2025-09-13 00:05:53.705 [INFO][4929] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.50.64/26 handle="k8s-pod-network.7ed572b47446112f184bc5bc46147fe7e895e7e7aa9c784208191688f9526442" host="ip-172-31-29-1" Sep 13 00:05:53.904437 env[1920]: 2025-09-13 00:05:53.741 [INFO][4929] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.50.70/26] block=192.168.50.64/26 handle="k8s-pod-network.7ed572b47446112f184bc5bc46147fe7e895e7e7aa9c784208191688f9526442" host="ip-172-31-29-1" Sep 13 00:05:53.904437 env[1920]: 2025-09-13 00:05:53.741 [INFO][4929] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.70/26] handle="k8s-pod-network.7ed572b47446112f184bc5bc46147fe7e895e7e7aa9c784208191688f9526442" host="ip-172-31-29-1" Sep 13 00:05:53.904437 env[1920]: 2025-09-13 00:05:53.741 [INFO][4929] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:05:53.904437 env[1920]: 2025-09-13 00:05:53.741 [INFO][4929] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.70/26] IPv6=[] ContainerID="7ed572b47446112f184bc5bc46147fe7e895e7e7aa9c784208191688f9526442" HandleID="k8s-pod-network.7ed572b47446112f184bc5bc46147fe7e895e7e7aa9c784208191688f9526442" Workload="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--fkdwg-eth0" Sep 13 00:05:53.906209 env[1920]: 2025-09-13 00:05:53.753 [INFO][4849] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7ed572b47446112f184bc5bc46147fe7e895e7e7aa9c784208191688f9526442" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fkdwg" WorkloadEndpoint="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--fkdwg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--1-k8s-coredns--7c65d6cfc9--fkdwg-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"42009314-cb57-4945-ba05-c28efb80272b", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 4, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-1", ContainerID:"", Pod:"coredns-7c65d6cfc9-fkdwg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali857449cc2f9", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:53.906209 env[1920]: 2025-09-13 00:05:53.753 [INFO][4849] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.70/32] ContainerID="7ed572b47446112f184bc5bc46147fe7e895e7e7aa9c784208191688f9526442" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fkdwg" WorkloadEndpoint="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--fkdwg-eth0" Sep 13 00:05:53.906209 env[1920]: 2025-09-13 00:05:53.753 [INFO][4849] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali857449cc2f9 ContainerID="7ed572b47446112f184bc5bc46147fe7e895e7e7aa9c784208191688f9526442" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fkdwg" WorkloadEndpoint="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--fkdwg-eth0" Sep 13 00:05:53.906209 env[1920]: 2025-09-13 00:05:53.780 [INFO][4849] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ed572b47446112f184bc5bc46147fe7e895e7e7aa9c784208191688f9526442" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fkdwg" WorkloadEndpoint="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--fkdwg-eth0" Sep 13 00:05:53.906209 env[1920]: 2025-09-13 00:05:53.806 [INFO][4849] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7ed572b47446112f184bc5bc46147fe7e895e7e7aa9c784208191688f9526442" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fkdwg" WorkloadEndpoint="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--fkdwg-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--1-k8s-coredns--7c65d6cfc9--fkdwg-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"42009314-cb57-4945-ba05-c28efb80272b", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 4, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-1", ContainerID:"7ed572b47446112f184bc5bc46147fe7e895e7e7aa9c784208191688f9526442", Pod:"coredns-7c65d6cfc9-fkdwg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali857449cc2f9", MAC:"a6:4b:7d:39:3b:e6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:53.906209 env[1920]: 2025-09-13 00:05:53.877 [INFO][4849] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7ed572b47446112f184bc5bc46147fe7e895e7e7aa9c784208191688f9526442" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fkdwg" WorkloadEndpoint="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--fkdwg-eth0" Sep 13 00:05:53.931832 systemd[1]: run-containerd-runc-k8s.io-20f725260c00a448e75c55331e52e41c73477b7f310b154ae308b79c186f2e4a-runc.2rAwqR.mount: Deactivated successfully. Sep 13 00:05:53.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.29.1:22-139.178.89.65:35462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:53.957724 systemd[1]: Started sshd@7-172.31.29.1:22-139.178.89.65:35462.service. Sep 13 00:05:53.962241 env[1920]: 2025-09-13 00:05:53.385 [INFO][4906] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" Sep 13 00:05:53.962241 env[1920]: 2025-09-13 00:05:53.387 [INFO][4906] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" iface="eth0" netns="/var/run/netns/cni-079710e3-9a7f-71e6-c625-2e4a531c08f5" Sep 13 00:05:53.962241 env[1920]: 2025-09-13 00:05:53.388 [INFO][4906] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" iface="eth0" netns="/var/run/netns/cni-079710e3-9a7f-71e6-c625-2e4a531c08f5" Sep 13 00:05:53.962241 env[1920]: 2025-09-13 00:05:53.389 [INFO][4906] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" iface="eth0" netns="/var/run/netns/cni-079710e3-9a7f-71e6-c625-2e4a531c08f5" Sep 13 00:05:53.962241 env[1920]: 2025-09-13 00:05:53.389 [INFO][4906] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" Sep 13 00:05:53.962241 env[1920]: 2025-09-13 00:05:53.390 [INFO][4906] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" Sep 13 00:05:53.962241 env[1920]: 2025-09-13 00:05:53.742 [INFO][5004] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" HandleID="k8s-pod-network.dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" Workload="ip--172--31--29--1-k8s-goldmane--7988f88666--6v2qn-eth0" Sep 13 00:05:53.962241 env[1920]: 2025-09-13 00:05:53.806 [INFO][5004] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:53.962241 env[1920]: 2025-09-13 00:05:53.806 [INFO][5004] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:53.962241 env[1920]: 2025-09-13 00:05:53.877 [WARNING][5004] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" HandleID="k8s-pod-network.dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" Workload="ip--172--31--29--1-k8s-goldmane--7988f88666--6v2qn-eth0" Sep 13 00:05:53.962241 env[1920]: 2025-09-13 00:05:53.877 [INFO][5004] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" HandleID="k8s-pod-network.dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" Workload="ip--172--31--29--1-k8s-goldmane--7988f88666--6v2qn-eth0" Sep 13 00:05:53.962241 env[1920]: 2025-09-13 00:05:53.894 [INFO][5004] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:53.962241 env[1920]: 2025-09-13 00:05:53.909 [INFO][4906] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" Sep 13 00:05:53.984428 systemd[1]: run-netns-cni\x2d079710e3\x2d9a7f\x2d71e6\x2dc625\x2d2e4a531c08f5.mount: Deactivated successfully. 
Sep 13 00:05:54.010274 env[1920]: time="2025-09-13T00:05:54.010182724Z" level=info msg="TearDown network for sandbox \"dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b\" successfully" Sep 13 00:05:54.010958 env[1920]: time="2025-09-13T00:05:54.010877893Z" level=info msg="StopPodSandbox for \"dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b\" returns successfully" Sep 13 00:05:54.023945 env[1920]: time="2025-09-13T00:05:54.023879709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-6v2qn,Uid:dce9497b-6716-4542-9084-aaea79149a75,Namespace:calico-system,Attempt:1,}" Sep 13 00:05:53.664000 audit[5051]: NETFILTER_CFG table=filter:108 family=2 entries=128 op=nft_register_chain pid=5051 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:05:53.664000 audit[5051]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=72768 a0=3 a1=ffffcc3c9820 a2=0 a3=ffffb9983fa8 items=0 ppid=4415 pid=5051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:53.664000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:05:54.131101 env[1920]: time="2025-09-13T00:05:54.131030136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8978b56b9-mstsf,Uid:527909b5-007b-4b06-af5d-92e50eb61126,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c6597bd5b30b19a93626ba95673fd81ae6461643bedc3c601682b99b37ebff2a\"" Sep 13 00:05:54.139000 audit[5102]: NETFILTER_CFG table=filter:109 family=2 entries=141 op=nft_register_chain pid=5102 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:05:54.139000 audit[5102]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=82980 a0=3 a1=ffffd049cdd0 a2=0 
a3=ffffbcdc7fa8 items=0 ppid=4415 pid=5102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:54.139000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:05:54.218725 systemd-networkd[1599]: vxlan.calico: Gained IPv6LL Sep 13 00:05:54.225000 audit[5071]: USER_ACCT pid=5071 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:05:54.227731 sshd[5071]: Accepted publickey for core from 139.178.89.65 port 35462 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:05:54.229000 audit[5071]: CRED_ACQ pid=5071 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:05:54.229000 audit[5071]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd827f3c0 a2=3 a3=1 items=0 ppid=1 pid=5071 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:54.229000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:05:54.231460 sshd[5071]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:05:54.242507 systemd-logind[1911]: New session 8 of user core. Sep 13 00:05:54.244502 systemd[1]: Started session-8.scope. 
Sep 13 00:05:54.246887 env[1920]: time="2025-09-13T00:05:54.246824922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cdkj7,Uid:b0810e85-a772-4d04-a431-1cf9fa4bc8f8,Namespace:calico-system,Attempt:1,} returns sandbox id \"20f725260c00a448e75c55331e52e41c73477b7f310b154ae308b79c186f2e4a\"" Sep 13 00:05:54.260000 audit[5071]: USER_START pid=5071 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:05:54.263000 audit[5136]: CRED_ACQ pid=5136 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:05:54.272507 env[1920]: time="2025-09-13T00:05:54.272218724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:54.272507 env[1920]: time="2025-09-13T00:05:54.272304352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:54.272507 env[1920]: time="2025-09-13T00:05:54.272331883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:54.277755 env[1920]: time="2025-09-13T00:05:54.277190642Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ed572b47446112f184bc5bc46147fe7e895e7e7aa9c784208191688f9526442 pid=5115 runtime=io.containerd.runc.v2 Sep 13 00:05:54.281993 systemd-networkd[1599]: cali39934a37d19: Gained IPv6LL Sep 13 00:05:54.594832 env[1920]: time="2025-09-13T00:05:54.592566998Z" level=info msg="StopPodSandbox for \"0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0\"" Sep 13 00:05:54.709904 env[1920]: time="2025-09-13T00:05:54.699536591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fkdwg,Uid:42009314-cb57-4945-ba05-c28efb80272b,Namespace:kube-system,Attempt:1,} returns sandbox id \"7ed572b47446112f184bc5bc46147fe7e895e7e7aa9c784208191688f9526442\"" Sep 13 00:05:54.709904 env[1920]: time="2025-09-13T00:05:54.704827921Z" level=info msg="CreateContainer within sandbox \"7ed572b47446112f184bc5bc46147fe7e895e7e7aa9c784208191688f9526442\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:05:54.759275 env[1920]: 2025-09-13 00:05:54.307 [INFO][5065] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" Sep 13 00:05:54.759275 env[1920]: 2025-09-13 00:05:54.307 [INFO][5065] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" iface="eth0" netns="/var/run/netns/cni-e7a6fafc-c383-8210-6c3f-0fbbdbcdc299" Sep 13 00:05:54.759275 env[1920]: 2025-09-13 00:05:54.308 [INFO][5065] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" iface="eth0" netns="/var/run/netns/cni-e7a6fafc-c383-8210-6c3f-0fbbdbcdc299" Sep 13 00:05:54.759275 env[1920]: 2025-09-13 00:05:54.309 [INFO][5065] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" iface="eth0" netns="/var/run/netns/cni-e7a6fafc-c383-8210-6c3f-0fbbdbcdc299" Sep 13 00:05:54.759275 env[1920]: 2025-09-13 00:05:54.309 [INFO][5065] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" Sep 13 00:05:54.759275 env[1920]: 2025-09-13 00:05:54.309 [INFO][5065] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" Sep 13 00:05:54.759275 env[1920]: 2025-09-13 00:05:54.662 [INFO][5142] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" HandleID="k8s-pod-network.b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" Workload="ip--172--31--29--1-k8s-calico--kube--controllers--6b97f6f77f--zv7t8-eth0" Sep 13 00:05:54.759275 env[1920]: 2025-09-13 00:05:54.665 [INFO][5142] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:54.759275 env[1920]: 2025-09-13 00:05:54.666 [INFO][5142] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:54.759275 env[1920]: 2025-09-13 00:05:54.731 [WARNING][5142] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" HandleID="k8s-pod-network.b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" Workload="ip--172--31--29--1-k8s-calico--kube--controllers--6b97f6f77f--zv7t8-eth0" Sep 13 00:05:54.759275 env[1920]: 2025-09-13 00:05:54.731 [INFO][5142] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" HandleID="k8s-pod-network.b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" Workload="ip--172--31--29--1-k8s-calico--kube--controllers--6b97f6f77f--zv7t8-eth0" Sep 13 00:05:54.759275 env[1920]: 2025-09-13 00:05:54.734 [INFO][5142] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:54.759275 env[1920]: 2025-09-13 00:05:54.742 [INFO][5065] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" Sep 13 00:05:54.771752 systemd[1]: run-netns-cni\x2de7a6fafc\x2dc383\x2d8210\x2d6c3f\x2d0fbbdbcdc299.mount: Deactivated successfully. Sep 13 00:05:54.779478 env[1920]: time="2025-09-13T00:05:54.778021976Z" level=info msg="TearDown network for sandbox \"b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00\" successfully" Sep 13 00:05:54.779646 env[1920]: time="2025-09-13T00:05:54.779454007Z" level=info msg="StopPodSandbox for \"b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00\" returns successfully" Sep 13 00:05:54.790083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount704352950.mount: Deactivated successfully. 
Sep 13 00:05:54.818204 env[1920]: time="2025-09-13T00:05:54.817202220Z" level=info msg="CreateContainer within sandbox \"7ed572b47446112f184bc5bc46147fe7e895e7e7aa9c784208191688f9526442\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0ed30bc2b66393eeeb37774e13a63dfadde2c124c841f6ebfca34a6f900c399b\"" Sep 13 00:05:54.823568 env[1920]: time="2025-09-13T00:05:54.823498315Z" level=info msg="StartContainer for \"0ed30bc2b66393eeeb37774e13a63dfadde2c124c841f6ebfca34a6f900c399b\"" Sep 13 00:05:54.838353 env[1920]: time="2025-09-13T00:05:54.838259711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b97f6f77f-zv7t8,Uid:5015b2dd-c6f7-46e4-9af1-65597994d2b9,Namespace:calico-system,Attempt:1,}" Sep 13 00:05:54.847361 sshd[5071]: pam_unix(sshd:session): session closed for user core Sep 13 00:05:54.868648 kernel: kauditd_printk_skb: 574 callbacks suppressed Sep 13 00:05:54.868850 kernel: audit: type=1106 audit(1757721954.850:415): pid=5071 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:05:54.850000 audit[5071]: USER_END pid=5071 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:05:54.857104 systemd-networkd[1599]: cali46578ccb397: Gained IPv6LL Sep 13 00:05:54.867104 systemd[1]: sshd@7-172.31.29.1:22-139.178.89.65:35462.service: Deactivated successfully. Sep 13 00:05:54.869940 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:05:54.869984 systemd-logind[1911]: Session 8 logged out. Waiting for processes to exit. 
Sep 13 00:05:54.873785 systemd-logind[1911]: Removed session 8. Sep 13 00:05:54.888174 kernel: audit: type=1104 audit(1757721954.850:416): pid=5071 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:05:54.850000 audit[5071]: CRED_DISP pid=5071 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:05:54.905420 kernel: audit: type=1131 audit(1757721954.866:417): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.29.1:22-139.178.89.65:35462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:54.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.29.1:22-139.178.89.65:35462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:05:54.924503 env[1920]: time="2025-09-13T00:05:54.924346394Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:54.935968 env[1920]: time="2025-09-13T00:05:54.935791594Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:54.970619 env[1920]: time="2025-09-13T00:05:54.970565656Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:54.980602 env[1920]: time="2025-09-13T00:05:54.980539856Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:54.981820 env[1920]: time="2025-09-13T00:05:54.981738477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 13 00:05:54.991848 env[1920]: time="2025-09-13T00:05:54.991756901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:05:55.001054 env[1920]: time="2025-09-13T00:05:55.000357443Z" level=info msg="CreateContainer within sandbox \"ff48d1fadd4a3e58e0bdebb1872e609b966d2148fb4b04c0efc894f4d0161838\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:05:55.091831 env[1920]: time="2025-09-13T00:05:55.091722684Z" level=info msg="CreateContainer within sandbox \"ff48d1fadd4a3e58e0bdebb1872e609b966d2148fb4b04c0efc894f4d0161838\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id 
\"1b9eee130186f7e3b0c71ed99a66a91ff9f3ef58f8c7a8ad38333d54b6599c05\"" Sep 13 00:05:55.097205 env[1920]: time="2025-09-13T00:05:55.097142113Z" level=info msg="StartContainer for \"1b9eee130186f7e3b0c71ed99a66a91ff9f3ef58f8c7a8ad38333d54b6599c05\"" Sep 13 00:05:55.123764 systemd-networkd[1599]: calidb8036257aa: Link UP Sep 13 00:05:55.143682 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:05:55.143860 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calidb8036257aa: link becomes ready Sep 13 00:05:55.143861 systemd-networkd[1599]: calidb8036257aa: Gained carrier Sep 13 00:05:55.178988 systemd-networkd[1599]: cali857449cc2f9: Gained IPv6LL Sep 13 00:05:55.281060 env[1920]: 2025-09-13 00:05:54.635 [INFO][5109] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--1-k8s-goldmane--7988f88666--6v2qn-eth0 goldmane-7988f88666- calico-system dce9497b-6716-4542-9084-aaea79149a75 1006 0 2025-09-13 00:05:23 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-29-1 goldmane-7988f88666-6v2qn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calidb8036257aa [] [] }} ContainerID="5a3336a46a17f3b65ca66b0f81401f8f5c199a9311e56535c600ff6a80093d45" Namespace="calico-system" Pod="goldmane-7988f88666-6v2qn" WorkloadEndpoint="ip--172--31--29--1-k8s-goldmane--7988f88666--6v2qn-" Sep 13 00:05:55.281060 env[1920]: 2025-09-13 00:05:54.635 [INFO][5109] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5a3336a46a17f3b65ca66b0f81401f8f5c199a9311e56535c600ff6a80093d45" Namespace="calico-system" Pod="goldmane-7988f88666-6v2qn" WorkloadEndpoint="ip--172--31--29--1-k8s-goldmane--7988f88666--6v2qn-eth0" Sep 13 00:05:55.281060 env[1920]: 2025-09-13 00:05:54.968 [INFO][5191] ipam/ipam_plugin.go 
225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5a3336a46a17f3b65ca66b0f81401f8f5c199a9311e56535c600ff6a80093d45" HandleID="k8s-pod-network.5a3336a46a17f3b65ca66b0f81401f8f5c199a9311e56535c600ff6a80093d45" Workload="ip--172--31--29--1-k8s-goldmane--7988f88666--6v2qn-eth0" Sep 13 00:05:55.281060 env[1920]: 2025-09-13 00:05:54.969 [INFO][5191] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5a3336a46a17f3b65ca66b0f81401f8f5c199a9311e56535c600ff6a80093d45" HandleID="k8s-pod-network.5a3336a46a17f3b65ca66b0f81401f8f5c199a9311e56535c600ff6a80093d45" Workload="ip--172--31--29--1-k8s-goldmane--7988f88666--6v2qn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000121aa0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-1", "pod":"goldmane-7988f88666-6v2qn", "timestamp":"2025-09-13 00:05:54.968941257 +0000 UTC"}, Hostname:"ip-172-31-29-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:05:55.281060 env[1920]: 2025-09-13 00:05:54.969 [INFO][5191] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:55.281060 env[1920]: 2025-09-13 00:05:54.969 [INFO][5191] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:05:55.281060 env[1920]: 2025-09-13 00:05:54.969 [INFO][5191] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-1' Sep 13 00:05:55.281060 env[1920]: 2025-09-13 00:05:54.999 [INFO][5191] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5a3336a46a17f3b65ca66b0f81401f8f5c199a9311e56535c600ff6a80093d45" host="ip-172-31-29-1" Sep 13 00:05:55.281060 env[1920]: 2025-09-13 00:05:55.010 [INFO][5191] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-1" Sep 13 00:05:55.281060 env[1920]: 2025-09-13 00:05:55.019 [INFO][5191] ipam/ipam.go 511: Trying affinity for 192.168.50.64/26 host="ip-172-31-29-1" Sep 13 00:05:55.281060 env[1920]: 2025-09-13 00:05:55.023 [INFO][5191] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.64/26 host="ip-172-31-29-1" Sep 13 00:05:55.281060 env[1920]: 2025-09-13 00:05:55.029 [INFO][5191] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.64/26 host="ip-172-31-29-1" Sep 13 00:05:55.281060 env[1920]: 2025-09-13 00:05:55.029 [INFO][5191] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.50.64/26 handle="k8s-pod-network.5a3336a46a17f3b65ca66b0f81401f8f5c199a9311e56535c600ff6a80093d45" host="ip-172-31-29-1" Sep 13 00:05:55.281060 env[1920]: 2025-09-13 00:05:55.033 [INFO][5191] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5a3336a46a17f3b65ca66b0f81401f8f5c199a9311e56535c600ff6a80093d45 Sep 13 00:05:55.281060 env[1920]: 2025-09-13 00:05:55.041 [INFO][5191] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.50.64/26 handle="k8s-pod-network.5a3336a46a17f3b65ca66b0f81401f8f5c199a9311e56535c600ff6a80093d45" host="ip-172-31-29-1" Sep 13 00:05:55.281060 env[1920]: 2025-09-13 00:05:55.064 [INFO][5191] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.50.71/26] block=192.168.50.64/26 handle="k8s-pod-network.5a3336a46a17f3b65ca66b0f81401f8f5c199a9311e56535c600ff6a80093d45" 
host="ip-172-31-29-1" Sep 13 00:05:55.281060 env[1920]: 2025-09-13 00:05:55.065 [INFO][5191] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.71/26] handle="k8s-pod-network.5a3336a46a17f3b65ca66b0f81401f8f5c199a9311e56535c600ff6a80093d45" host="ip-172-31-29-1" Sep 13 00:05:55.281060 env[1920]: 2025-09-13 00:05:55.065 [INFO][5191] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:55.281060 env[1920]: 2025-09-13 00:05:55.065 [INFO][5191] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.71/26] IPv6=[] ContainerID="5a3336a46a17f3b65ca66b0f81401f8f5c199a9311e56535c600ff6a80093d45" HandleID="k8s-pod-network.5a3336a46a17f3b65ca66b0f81401f8f5c199a9311e56535c600ff6a80093d45" Workload="ip--172--31--29--1-k8s-goldmane--7988f88666--6v2qn-eth0" Sep 13 00:05:55.282503 env[1920]: 2025-09-13 00:05:55.074 [INFO][5109] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5a3336a46a17f3b65ca66b0f81401f8f5c199a9311e56535c600ff6a80093d45" Namespace="calico-system" Pod="goldmane-7988f88666-6v2qn" WorkloadEndpoint="ip--172--31--29--1-k8s-goldmane--7988f88666--6v2qn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--1-k8s-goldmane--7988f88666--6v2qn-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"dce9497b-6716-4542-9084-aaea79149a75", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-1", ContainerID:"", Pod:"goldmane-7988f88666-6v2qn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.50.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidb8036257aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:55.282503 env[1920]: 2025-09-13 00:05:55.075 [INFO][5109] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.71/32] ContainerID="5a3336a46a17f3b65ca66b0f81401f8f5c199a9311e56535c600ff6a80093d45" Namespace="calico-system" Pod="goldmane-7988f88666-6v2qn" WorkloadEndpoint="ip--172--31--29--1-k8s-goldmane--7988f88666--6v2qn-eth0" Sep 13 00:05:55.282503 env[1920]: 2025-09-13 00:05:55.075 [INFO][5109] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb8036257aa ContainerID="5a3336a46a17f3b65ca66b0f81401f8f5c199a9311e56535c600ff6a80093d45" Namespace="calico-system" Pod="goldmane-7988f88666-6v2qn" WorkloadEndpoint="ip--172--31--29--1-k8s-goldmane--7988f88666--6v2qn-eth0" Sep 13 00:05:55.282503 env[1920]: 2025-09-13 00:05:55.146 [INFO][5109] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5a3336a46a17f3b65ca66b0f81401f8f5c199a9311e56535c600ff6a80093d45" Namespace="calico-system" Pod="goldmane-7988f88666-6v2qn" WorkloadEndpoint="ip--172--31--29--1-k8s-goldmane--7988f88666--6v2qn-eth0" Sep 13 00:05:55.282503 env[1920]: 2025-09-13 00:05:55.174 [INFO][5109] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5a3336a46a17f3b65ca66b0f81401f8f5c199a9311e56535c600ff6a80093d45" Namespace="calico-system" Pod="goldmane-7988f88666-6v2qn" WorkloadEndpoint="ip--172--31--29--1-k8s-goldmane--7988f88666--6v2qn-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--1-k8s-goldmane--7988f88666--6v2qn-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"dce9497b-6716-4542-9084-aaea79149a75", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-1", ContainerID:"5a3336a46a17f3b65ca66b0f81401f8f5c199a9311e56535c600ff6a80093d45", Pod:"goldmane-7988f88666-6v2qn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.50.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidb8036257aa", MAC:"6e:f8:11:02:2d:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:55.282503 env[1920]: 2025-09-13 00:05:55.208 [INFO][5109] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5a3336a46a17f3b65ca66b0f81401f8f5c199a9311e56535c600ff6a80093d45" Namespace="calico-system" Pod="goldmane-7988f88666-6v2qn" WorkloadEndpoint="ip--172--31--29--1-k8s-goldmane--7988f88666--6v2qn-eth0" Sep 13 00:05:55.337260 env[1920]: time="2025-09-13T00:05:55.337149215Z" level=info msg="StartContainer for 
\"0ed30bc2b66393eeeb37774e13a63dfadde2c124c841f6ebfca34a6f900c399b\" returns successfully" Sep 13 00:05:55.447389 kernel: audit: type=1325 audit(1757721955.355:418): table=filter:110 family=2 entries=60 op=nft_register_chain pid=5274 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:05:55.447555 kernel: audit: type=1300 audit(1757721955.355:418): arch=c00000b7 syscall=211 success=yes exit=29916 a0=3 a1=fffff9339070 a2=0 a3=ffffa5647fa8 items=0 ppid=4415 pid=5274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:55.355000 audit[5274]: NETFILTER_CFG table=filter:110 family=2 entries=60 op=nft_register_chain pid=5274 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:05:55.355000 audit[5274]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29916 a0=3 a1=fffff9339070 a2=0 a3=ffffa5647fa8 items=0 ppid=4415 pid=5274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:55.355000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:05:55.471352 kernel: audit: type=1327 audit(1757721955.355:418): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:05:55.592622 env[1920]: time="2025-09-13T00:05:55.573810294Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:55.592622 env[1920]: time="2025-09-13T00:05:55.573920741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:55.592622 env[1920]: time="2025-09-13T00:05:55.573949076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:55.592622 env[1920]: time="2025-09-13T00:05:55.574322444Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a3336a46a17f3b65ca66b0f81401f8f5c199a9311e56535c600ff6a80093d45 pid=5300 runtime=io.containerd.runc.v2 Sep 13 00:05:55.647098 env[1920]: 2025-09-13 00:05:54.967 [WARNING][5193] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--tpkzf-eth0", GenerateName:"calico-apiserver-8978b56b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"d0a25691-16ce-4727-a17e-3adde345074c", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8978b56b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-1", ContainerID:"894b43e3dc548a07ab30eeca8bab3d67cdf388423ef8ef0ea5c3b88f1acd13b6", Pod:"calico-apiserver-8978b56b9-tpkzf", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6d9fff504e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:55.647098 env[1920]: 2025-09-13 00:05:54.967 [INFO][5193] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" Sep 13 00:05:55.647098 env[1920]: 2025-09-13 00:05:54.967 [INFO][5193] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" iface="eth0" netns="" Sep 13 00:05:55.647098 env[1920]: 2025-09-13 00:05:54.967 [INFO][5193] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" Sep 13 00:05:55.647098 env[1920]: 2025-09-13 00:05:54.967 [INFO][5193] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" Sep 13 00:05:55.647098 env[1920]: 2025-09-13 00:05:55.570 [INFO][5229] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" HandleID="k8s-pod-network.0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" Workload="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--tpkzf-eth0" Sep 13 00:05:55.647098 env[1920]: 2025-09-13 00:05:55.571 [INFO][5229] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:55.647098 env[1920]: 2025-09-13 00:05:55.571 [INFO][5229] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:05:55.647098 env[1920]: 2025-09-13 00:05:55.628 [WARNING][5229] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" HandleID="k8s-pod-network.0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" Workload="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--tpkzf-eth0" Sep 13 00:05:55.647098 env[1920]: 2025-09-13 00:05:55.628 [INFO][5229] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" HandleID="k8s-pod-network.0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" Workload="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--tpkzf-eth0" Sep 13 00:05:55.647098 env[1920]: 2025-09-13 00:05:55.635 [INFO][5229] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:55.647098 env[1920]: 2025-09-13 00:05:55.644 [INFO][5193] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" Sep 13 00:05:55.650123 env[1920]: time="2025-09-13T00:05:55.647136442Z" level=info msg="TearDown network for sandbox \"0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0\" successfully" Sep 13 00:05:55.650123 env[1920]: time="2025-09-13T00:05:55.647182562Z" level=info msg="StopPodSandbox for \"0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0\" returns successfully" Sep 13 00:05:55.650123 env[1920]: time="2025-09-13T00:05:55.649738068Z" level=info msg="RemovePodSandbox for \"0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0\"" Sep 13 00:05:55.650123 env[1920]: time="2025-09-13T00:05:55.649851335Z" level=info msg="Forcibly stopping sandbox \"0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0\"" Sep 13 00:05:55.712187 systemd[1]: run-containerd-runc-k8s.io-5a3336a46a17f3b65ca66b0f81401f8f5c199a9311e56535c600ff6a80093d45-runc.Bo8NAs.mount: Deactivated successfully. 
Sep 13 00:05:55.873935 env[1920]: time="2025-09-13T00:05:55.872394584Z" level=info msg="StartContainer for \"1b9eee130186f7e3b0c71ed99a66a91ff9f3ef58f8c7a8ad38333d54b6599c05\" returns successfully" Sep 13 00:05:55.913373 env[1920]: time="2025-09-13T00:05:55.910887824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-6v2qn,Uid:dce9497b-6716-4542-9084-aaea79149a75,Namespace:calico-system,Attempt:1,} returns sandbox id \"5a3336a46a17f3b65ca66b0f81401f8f5c199a9311e56535c600ff6a80093d45\"" Sep 13 00:05:55.994385 systemd-networkd[1599]: cali166f7934ca5: Link UP Sep 13 00:05:56.000042 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali166f7934ca5: link becomes ready Sep 13 00:05:56.000920 systemd-networkd[1599]: cali166f7934ca5: Gained carrier Sep 13 00:05:56.047915 env[1920]: 2025-09-13 00:05:55.599 [INFO][5215] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--1-k8s-calico--kube--controllers--6b97f6f77f--zv7t8-eth0 calico-kube-controllers-6b97f6f77f- calico-system 5015b2dd-c6f7-46e4-9af1-65597994d2b9 1050 0 2025-09-13 00:05:24 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6b97f6f77f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-29-1 calico-kube-controllers-6b97f6f77f-zv7t8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali166f7934ca5 [] [] }} ContainerID="597b77902a13bf16c6747710c077f79062a77c0536217fbf7e305e81f56c3468" Namespace="calico-system" Pod="calico-kube-controllers-6b97f6f77f-zv7t8" WorkloadEndpoint="ip--172--31--29--1-k8s-calico--kube--controllers--6b97f6f77f--zv7t8-" Sep 13 00:05:56.047915 env[1920]: 2025-09-13 00:05:55.600 [INFO][5215] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="597b77902a13bf16c6747710c077f79062a77c0536217fbf7e305e81f56c3468" Namespace="calico-system" Pod="calico-kube-controllers-6b97f6f77f-zv7t8" WorkloadEndpoint="ip--172--31--29--1-k8s-calico--kube--controllers--6b97f6f77f--zv7t8-eth0" Sep 13 00:05:56.047915 env[1920]: 2025-09-13 00:05:55.846 [INFO][5327] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="597b77902a13bf16c6747710c077f79062a77c0536217fbf7e305e81f56c3468" HandleID="k8s-pod-network.597b77902a13bf16c6747710c077f79062a77c0536217fbf7e305e81f56c3468" Workload="ip--172--31--29--1-k8s-calico--kube--controllers--6b97f6f77f--zv7t8-eth0" Sep 13 00:05:56.047915 env[1920]: 2025-09-13 00:05:55.847 [INFO][5327] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="597b77902a13bf16c6747710c077f79062a77c0536217fbf7e305e81f56c3468" HandleID="k8s-pod-network.597b77902a13bf16c6747710c077f79062a77c0536217fbf7e305e81f56c3468" Workload="ip--172--31--29--1-k8s-calico--kube--controllers--6b97f6f77f--zv7t8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000201340), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-1", "pod":"calico-kube-controllers-6b97f6f77f-zv7t8", "timestamp":"2025-09-13 00:05:55.84551034 +0000 UTC"}, Hostname:"ip-172-31-29-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:05:56.047915 env[1920]: 2025-09-13 00:05:55.847 [INFO][5327] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:56.047915 env[1920]: 2025-09-13 00:05:55.848 [INFO][5327] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:05:56.047915 env[1920]: 2025-09-13 00:05:55.848 [INFO][5327] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-1' Sep 13 00:05:56.047915 env[1920]: 2025-09-13 00:05:55.870 [INFO][5327] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.597b77902a13bf16c6747710c077f79062a77c0536217fbf7e305e81f56c3468" host="ip-172-31-29-1" Sep 13 00:05:56.047915 env[1920]: 2025-09-13 00:05:55.909 [INFO][5327] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-1" Sep 13 00:05:56.047915 env[1920]: 2025-09-13 00:05:55.934 [INFO][5327] ipam/ipam.go 511: Trying affinity for 192.168.50.64/26 host="ip-172-31-29-1" Sep 13 00:05:56.047915 env[1920]: 2025-09-13 00:05:55.939 [INFO][5327] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.64/26 host="ip-172-31-29-1" Sep 13 00:05:56.047915 env[1920]: 2025-09-13 00:05:55.944 [INFO][5327] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.64/26 host="ip-172-31-29-1" Sep 13 00:05:56.047915 env[1920]: 2025-09-13 00:05:55.944 [INFO][5327] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.50.64/26 handle="k8s-pod-network.597b77902a13bf16c6747710c077f79062a77c0536217fbf7e305e81f56c3468" host="ip-172-31-29-1" Sep 13 00:05:56.047915 env[1920]: 2025-09-13 00:05:55.951 [INFO][5327] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.597b77902a13bf16c6747710c077f79062a77c0536217fbf7e305e81f56c3468 Sep 13 00:05:56.047915 env[1920]: 2025-09-13 00:05:55.959 [INFO][5327] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.50.64/26 handle="k8s-pod-network.597b77902a13bf16c6747710c077f79062a77c0536217fbf7e305e81f56c3468" host="ip-172-31-29-1" Sep 13 00:05:56.047915 env[1920]: 2025-09-13 00:05:55.975 [INFO][5327] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.50.72/26] block=192.168.50.64/26 handle="k8s-pod-network.597b77902a13bf16c6747710c077f79062a77c0536217fbf7e305e81f56c3468" 
host="ip-172-31-29-1" Sep 13 00:05:56.047915 env[1920]: 2025-09-13 00:05:55.975 [INFO][5327] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.72/26] handle="k8s-pod-network.597b77902a13bf16c6747710c077f79062a77c0536217fbf7e305e81f56c3468" host="ip-172-31-29-1" Sep 13 00:05:56.047915 env[1920]: 2025-09-13 00:05:55.975 [INFO][5327] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:56.047915 env[1920]: 2025-09-13 00:05:55.975 [INFO][5327] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.72/26] IPv6=[] ContainerID="597b77902a13bf16c6747710c077f79062a77c0536217fbf7e305e81f56c3468" HandleID="k8s-pod-network.597b77902a13bf16c6747710c077f79062a77c0536217fbf7e305e81f56c3468" Workload="ip--172--31--29--1-k8s-calico--kube--controllers--6b97f6f77f--zv7t8-eth0" Sep 13 00:05:56.051540 env[1920]: 2025-09-13 00:05:55.979 [INFO][5215] cni-plugin/k8s.go 418: Populated endpoint ContainerID="597b77902a13bf16c6747710c077f79062a77c0536217fbf7e305e81f56c3468" Namespace="calico-system" Pod="calico-kube-controllers-6b97f6f77f-zv7t8" WorkloadEndpoint="ip--172--31--29--1-k8s-calico--kube--controllers--6b97f6f77f--zv7t8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--1-k8s-calico--kube--controllers--6b97f6f77f--zv7t8-eth0", GenerateName:"calico-kube-controllers-6b97f6f77f-", Namespace:"calico-system", SelfLink:"", UID:"5015b2dd-c6f7-46e4-9af1-65597994d2b9", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b97f6f77f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-1", ContainerID:"", Pod:"calico-kube-controllers-6b97f6f77f-zv7t8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali166f7934ca5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:56.051540 env[1920]: 2025-09-13 00:05:55.979 [INFO][5215] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.72/32] ContainerID="597b77902a13bf16c6747710c077f79062a77c0536217fbf7e305e81f56c3468" Namespace="calico-system" Pod="calico-kube-controllers-6b97f6f77f-zv7t8" WorkloadEndpoint="ip--172--31--29--1-k8s-calico--kube--controllers--6b97f6f77f--zv7t8-eth0" Sep 13 00:05:56.051540 env[1920]: 2025-09-13 00:05:55.979 [INFO][5215] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali166f7934ca5 ContainerID="597b77902a13bf16c6747710c077f79062a77c0536217fbf7e305e81f56c3468" Namespace="calico-system" Pod="calico-kube-controllers-6b97f6f77f-zv7t8" WorkloadEndpoint="ip--172--31--29--1-k8s-calico--kube--controllers--6b97f6f77f--zv7t8-eth0" Sep 13 00:05:56.051540 env[1920]: 2025-09-13 00:05:56.009 [INFO][5215] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="597b77902a13bf16c6747710c077f79062a77c0536217fbf7e305e81f56c3468" Namespace="calico-system" Pod="calico-kube-controllers-6b97f6f77f-zv7t8" WorkloadEndpoint="ip--172--31--29--1-k8s-calico--kube--controllers--6b97f6f77f--zv7t8-eth0" Sep 13 00:05:56.051540 env[1920]: 2025-09-13 00:05:56.011 [INFO][5215] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="597b77902a13bf16c6747710c077f79062a77c0536217fbf7e305e81f56c3468" Namespace="calico-system" Pod="calico-kube-controllers-6b97f6f77f-zv7t8" WorkloadEndpoint="ip--172--31--29--1-k8s-calico--kube--controllers--6b97f6f77f--zv7t8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--1-k8s-calico--kube--controllers--6b97f6f77f--zv7t8-eth0", GenerateName:"calico-kube-controllers-6b97f6f77f-", Namespace:"calico-system", SelfLink:"", UID:"5015b2dd-c6f7-46e4-9af1-65597994d2b9", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b97f6f77f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-1", ContainerID:"597b77902a13bf16c6747710c077f79062a77c0536217fbf7e305e81f56c3468", Pod:"calico-kube-controllers-6b97f6f77f-zv7t8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali166f7934ca5", MAC:"7a:ee:87:f2:d1:f4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:56.051540 env[1920]: 2025-09-13 
00:05:56.032 [INFO][5215] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="597b77902a13bf16c6747710c077f79062a77c0536217fbf7e305e81f56c3468" Namespace="calico-system" Pod="calico-kube-controllers-6b97f6f77f-zv7t8" WorkloadEndpoint="ip--172--31--29--1-k8s-calico--kube--controllers--6b97f6f77f--zv7t8-eth0" Sep 13 00:05:56.109000 audit[5391]: NETFILTER_CFG table=filter:111 family=2 entries=56 op=nft_register_chain pid=5391 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:05:56.109000 audit[5391]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25500 a0=3 a1=ffffd09b7660 a2=0 a3=ffff9fd63fa8 items=0 ppid=4415 pid=5391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:56.134677 kernel: audit: type=1325 audit(1757721956.109:419): table=filter:111 family=2 entries=56 op=nft_register_chain pid=5391 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:05:56.134935 kernel: audit: type=1300 audit(1757721956.109:419): arch=c00000b7 syscall=211 success=yes exit=25500 a0=3 a1=ffffd09b7660 a2=0 a3=ffff9fd63fa8 items=0 ppid=4415 pid=5391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:56.109000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:05:56.149727 kernel: audit: type=1327 audit(1757721956.109:419): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:05:56.158514 env[1920]: 2025-09-13 00:05:55.961 [WARNING][5347] cni-plugin/k8s.go 604: 
CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--tpkzf-eth0", GenerateName:"calico-apiserver-8978b56b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"d0a25691-16ce-4727-a17e-3adde345074c", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8978b56b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-1", ContainerID:"894b43e3dc548a07ab30eeca8bab3d67cdf388423ef8ef0ea5c3b88f1acd13b6", Pod:"calico-apiserver-8978b56b9-tpkzf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6d9fff504e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:56.158514 env[1920]: 2025-09-13 00:05:55.962 [INFO][5347] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" Sep 13 00:05:56.158514 env[1920]: 2025-09-13 00:05:55.962 [INFO][5347] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" iface="eth0" netns="" Sep 13 00:05:56.158514 env[1920]: 2025-09-13 00:05:55.962 [INFO][5347] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" Sep 13 00:05:56.158514 env[1920]: 2025-09-13 00:05:55.962 [INFO][5347] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" Sep 13 00:05:56.158514 env[1920]: 2025-09-13 00:05:56.112 [INFO][5379] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" HandleID="k8s-pod-network.0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" Workload="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--tpkzf-eth0" Sep 13 00:05:56.158514 env[1920]: 2025-09-13 00:05:56.112 [INFO][5379] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:56.158514 env[1920]: 2025-09-13 00:05:56.112 [INFO][5379] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:56.158514 env[1920]: 2025-09-13 00:05:56.139 [WARNING][5379] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" HandleID="k8s-pod-network.0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" Workload="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--tpkzf-eth0" Sep 13 00:05:56.158514 env[1920]: 2025-09-13 00:05:56.140 [INFO][5379] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" HandleID="k8s-pod-network.0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" Workload="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--tpkzf-eth0" Sep 13 00:05:56.158514 env[1920]: 2025-09-13 00:05:56.151 [INFO][5379] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:56.158514 env[1920]: 2025-09-13 00:05:56.154 [INFO][5347] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0" Sep 13 00:05:56.159461 env[1920]: time="2025-09-13T00:05:56.158541868Z" level=info msg="TearDown network for sandbox \"0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0\" successfully" Sep 13 00:05:56.176847 env[1920]: time="2025-09-13T00:05:56.176652644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:56.177025 env[1920]: time="2025-09-13T00:05:56.176838338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:56.177025 env[1920]: time="2025-09-13T00:05:56.176927266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:56.177618 env[1920]: time="2025-09-13T00:05:56.177503597Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/597b77902a13bf16c6747710c077f79062a77c0536217fbf7e305e81f56c3468 pid=5400 runtime=io.containerd.runc.v2 Sep 13 00:05:56.196065 env[1920]: time="2025-09-13T00:05:56.195960822Z" level=info msg="RemovePodSandbox \"0706492126a24553e09367706362a1d21362340b7198a0a57d3757af349fbde0\" returns successfully" Sep 13 00:05:56.197027 env[1920]: time="2025-09-13T00:05:56.196969086Z" level=info msg="StopPodSandbox for \"8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85\"" Sep 13 00:05:56.359066 kubelet[3053]: I0913 00:05:56.352687 3053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-fkdwg" podStartSLOduration=58.352661244 podStartE2EDuration="58.352661244s" podCreationTimestamp="2025-09-13 00:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:05:56.326305625 +0000 UTC m=+62.043605830" watchObservedRunningTime="2025-09-13 00:05:56.352661244 +0000 UTC m=+62.069961401" Sep 13 00:05:56.429000 audit[5448]: NETFILTER_CFG table=filter:112 family=2 entries=14 op=nft_register_rule pid=5448 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:05:56.429000 audit[5448]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd25d4570 a2=0 a3=1 items=0 ppid=3158 pid=5448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:56.429000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:05:56.441372 kernel: 
audit: type=1325 audit(1757721956.429:420): table=filter:112 family=2 entries=14 op=nft_register_rule pid=5448 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:05:56.440000 audit[5448]: NETFILTER_CFG table=nat:113 family=2 entries=44 op=nft_register_rule pid=5448 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:05:56.440000 audit[5448]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffd25d4570 a2=0 a3=1 items=0 ppid=3158 pid=5448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:56.440000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:05:56.528327 env[1920]: time="2025-09-13T00:05:56.528154120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b97f6f77f-zv7t8,Uid:5015b2dd-c6f7-46e4-9af1-65597994d2b9,Namespace:calico-system,Attempt:1,} returns sandbox id \"597b77902a13bf16c6747710c077f79062a77c0536217fbf7e305e81f56c3468\"" Sep 13 00:05:56.559000 audit[5457]: NETFILTER_CFG table=filter:114 family=2 entries=14 op=nft_register_rule pid=5457 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:05:56.559000 audit[5457]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe22c5450 a2=0 a3=1 items=0 ppid=3158 pid=5457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:56.559000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:05:56.573184 env[1920]: 2025-09-13 00:05:56.320 [WARNING][5429] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match 
WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--1-k8s-csi--node--driver--cdkj7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b0810e85-a772-4d04-a431-1cf9fa4bc8f8", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-1", ContainerID:"20f725260c00a448e75c55331e52e41c73477b7f310b154ae308b79c186f2e4a", Pod:"csi-node-driver-cdkj7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali46578ccb397", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:56.573184 env[1920]: 2025-09-13 00:05:56.321 [INFO][5429] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" Sep 13 00:05:56.573184 env[1920]: 2025-09-13 00:05:56.321 [INFO][5429] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" iface="eth0" netns="" Sep 13 00:05:56.573184 env[1920]: 2025-09-13 00:05:56.321 [INFO][5429] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" Sep 13 00:05:56.573184 env[1920]: 2025-09-13 00:05:56.321 [INFO][5429] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" Sep 13 00:05:56.573184 env[1920]: 2025-09-13 00:05:56.528 [INFO][5443] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" HandleID="k8s-pod-network.8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" Workload="ip--172--31--29--1-k8s-csi--node--driver--cdkj7-eth0" Sep 13 00:05:56.573184 env[1920]: 2025-09-13 00:05:56.528 [INFO][5443] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:56.573184 env[1920]: 2025-09-13 00:05:56.529 [INFO][5443] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:56.573184 env[1920]: 2025-09-13 00:05:56.558 [WARNING][5443] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" HandleID="k8s-pod-network.8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" Workload="ip--172--31--29--1-k8s-csi--node--driver--cdkj7-eth0" Sep 13 00:05:56.573184 env[1920]: 2025-09-13 00:05:56.558 [INFO][5443] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" HandleID="k8s-pod-network.8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" Workload="ip--172--31--29--1-k8s-csi--node--driver--cdkj7-eth0" Sep 13 00:05:56.573184 env[1920]: 2025-09-13 00:05:56.563 [INFO][5443] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:56.573184 env[1920]: 2025-09-13 00:05:56.569 [INFO][5429] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" Sep 13 00:05:56.575378 env[1920]: time="2025-09-13T00:05:56.573226045Z" level=info msg="TearDown network for sandbox \"8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85\" successfully" Sep 13 00:05:56.575378 env[1920]: time="2025-09-13T00:05:56.573272550Z" level=info msg="StopPodSandbox for \"8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85\" returns successfully" Sep 13 00:05:56.575378 env[1920]: time="2025-09-13T00:05:56.575266503Z" level=info msg="RemovePodSandbox for \"8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85\"" Sep 13 00:05:56.575378 env[1920]: time="2025-09-13T00:05:56.575327613Z" level=info msg="Forcibly stopping sandbox \"8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85\"" Sep 13 00:05:56.590000 audit[5457]: NETFILTER_CFG table=nat:115 family=2 entries=56 op=nft_register_chain pid=5457 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:05:56.590000 audit[5457]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffe22c5450 a2=0 a3=1 
items=0 ppid=3158 pid=5457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:56.590000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:05:56.778543 systemd-networkd[1599]: calidb8036257aa: Gained IPv6LL Sep 13 00:05:56.793974 env[1920]: 2025-09-13 00:05:56.705 [WARNING][5469] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--1-k8s-csi--node--driver--cdkj7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b0810e85-a772-4d04-a431-1cf9fa4bc8f8", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-1", ContainerID:"20f725260c00a448e75c55331e52e41c73477b7f310b154ae308b79c186f2e4a", Pod:"csi-node-driver-cdkj7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.69/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali46578ccb397", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:56.793974 env[1920]: 2025-09-13 00:05:56.705 [INFO][5469] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" Sep 13 00:05:56.793974 env[1920]: 2025-09-13 00:05:56.705 [INFO][5469] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" iface="eth0" netns="" Sep 13 00:05:56.793974 env[1920]: 2025-09-13 00:05:56.705 [INFO][5469] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" Sep 13 00:05:56.793974 env[1920]: 2025-09-13 00:05:56.705 [INFO][5469] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" Sep 13 00:05:56.793974 env[1920]: 2025-09-13 00:05:56.761 [INFO][5476] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" HandleID="k8s-pod-network.8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" Workload="ip--172--31--29--1-k8s-csi--node--driver--cdkj7-eth0" Sep 13 00:05:56.793974 env[1920]: 2025-09-13 00:05:56.762 [INFO][5476] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:56.793974 env[1920]: 2025-09-13 00:05:56.762 [INFO][5476] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:56.793974 env[1920]: 2025-09-13 00:05:56.782 [WARNING][5476] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" HandleID="k8s-pod-network.8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" Workload="ip--172--31--29--1-k8s-csi--node--driver--cdkj7-eth0" Sep 13 00:05:56.793974 env[1920]: 2025-09-13 00:05:56.782 [INFO][5476] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" HandleID="k8s-pod-network.8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" Workload="ip--172--31--29--1-k8s-csi--node--driver--cdkj7-eth0" Sep 13 00:05:56.793974 env[1920]: 2025-09-13 00:05:56.785 [INFO][5476] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:56.793974 env[1920]: 2025-09-13 00:05:56.791 [INFO][5469] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85" Sep 13 00:05:56.795139 env[1920]: time="2025-09-13T00:05:56.794015040Z" level=info msg="TearDown network for sandbox \"8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85\" successfully" Sep 13 00:05:56.804155 env[1920]: time="2025-09-13T00:05:56.804078248Z" level=info msg="RemovePodSandbox \"8cddd99214ca60fd9a43f53d42e9c03b2ea8b5f9ab15dea5a01b7e78cc50ab85\" returns successfully" Sep 13 00:05:56.805323 env[1920]: time="2025-09-13T00:05:56.805267237Z" level=info msg="StopPodSandbox for \"7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec\"" Sep 13 00:05:57.026002 env[1920]: 2025-09-13 00:05:56.924 [WARNING][5490] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--mstsf-eth0", GenerateName:"calico-apiserver-8978b56b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"527909b5-007b-4b06-af5d-92e50eb61126", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8978b56b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-1", ContainerID:"c6597bd5b30b19a93626ba95673fd81ae6461643bedc3c601682b99b37ebff2a", Pod:"calico-apiserver-8978b56b9-mstsf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali39934a37d19", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:57.026002 env[1920]: 2025-09-13 00:05:56.924 [INFO][5490] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" Sep 13 00:05:57.026002 env[1920]: 2025-09-13 00:05:56.924 [INFO][5490] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" iface="eth0" netns="" Sep 13 00:05:57.026002 env[1920]: 2025-09-13 00:05:56.924 [INFO][5490] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" Sep 13 00:05:57.026002 env[1920]: 2025-09-13 00:05:56.924 [INFO][5490] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" Sep 13 00:05:57.026002 env[1920]: 2025-09-13 00:05:56.991 [INFO][5497] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" HandleID="k8s-pod-network.7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" Workload="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--mstsf-eth0" Sep 13 00:05:57.026002 env[1920]: 2025-09-13 00:05:56.991 [INFO][5497] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:57.026002 env[1920]: 2025-09-13 00:05:56.991 [INFO][5497] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:57.026002 env[1920]: 2025-09-13 00:05:57.016 [WARNING][5497] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" HandleID="k8s-pod-network.7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" Workload="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--mstsf-eth0" Sep 13 00:05:57.026002 env[1920]: 2025-09-13 00:05:57.016 [INFO][5497] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" HandleID="k8s-pod-network.7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" Workload="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--mstsf-eth0" Sep 13 00:05:57.026002 env[1920]: 2025-09-13 00:05:57.019 [INFO][5497] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:57.026002 env[1920]: 2025-09-13 00:05:57.022 [INFO][5490] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" Sep 13 00:05:57.027234 env[1920]: time="2025-09-13T00:05:57.026050995Z" level=info msg="TearDown network for sandbox \"7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec\" successfully" Sep 13 00:05:57.027234 env[1920]: time="2025-09-13T00:05:57.026097019Z" level=info msg="StopPodSandbox for \"7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec\" returns successfully" Sep 13 00:05:57.029590 env[1920]: time="2025-09-13T00:05:57.029527609Z" level=info msg="RemovePodSandbox for \"7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec\"" Sep 13 00:05:57.029903 env[1920]: time="2025-09-13T00:05:57.029595151Z" level=info msg="Forcibly stopping sandbox \"7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec\"" Sep 13 00:05:57.162010 systemd-networkd[1599]: cali166f7934ca5: Gained IPv6LL Sep 13 00:05:57.226278 env[1920]: 2025-09-13 00:05:57.111 [WARNING][5512] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--mstsf-eth0", GenerateName:"calico-apiserver-8978b56b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"527909b5-007b-4b06-af5d-92e50eb61126", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8978b56b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-1", ContainerID:"c6597bd5b30b19a93626ba95673fd81ae6461643bedc3c601682b99b37ebff2a", Pod:"calico-apiserver-8978b56b9-mstsf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali39934a37d19", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:57.226278 env[1920]: 2025-09-13 00:05:57.111 [INFO][5512] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" Sep 13 00:05:57.226278 env[1920]: 2025-09-13 00:05:57.111 [INFO][5512] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" iface="eth0" netns="" Sep 13 00:05:57.226278 env[1920]: 2025-09-13 00:05:57.111 [INFO][5512] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" Sep 13 00:05:57.226278 env[1920]: 2025-09-13 00:05:57.111 [INFO][5512] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" Sep 13 00:05:57.226278 env[1920]: 2025-09-13 00:05:57.204 [INFO][5519] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" HandleID="k8s-pod-network.7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" Workload="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--mstsf-eth0" Sep 13 00:05:57.226278 env[1920]: 2025-09-13 00:05:57.204 [INFO][5519] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:57.226278 env[1920]: 2025-09-13 00:05:57.204 [INFO][5519] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:57.226278 env[1920]: 2025-09-13 00:05:57.218 [WARNING][5519] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" HandleID="k8s-pod-network.7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" Workload="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--mstsf-eth0" Sep 13 00:05:57.226278 env[1920]: 2025-09-13 00:05:57.218 [INFO][5519] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" HandleID="k8s-pod-network.7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" Workload="ip--172--31--29--1-k8s-calico--apiserver--8978b56b9--mstsf-eth0" Sep 13 00:05:57.226278 env[1920]: 2025-09-13 00:05:57.220 [INFO][5519] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:57.226278 env[1920]: 2025-09-13 00:05:57.223 [INFO][5512] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec" Sep 13 00:05:57.227698 env[1920]: time="2025-09-13T00:05:57.226877257Z" level=info msg="TearDown network for sandbox \"7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec\" successfully" Sep 13 00:05:57.231615 env[1920]: time="2025-09-13T00:05:57.231538248Z" level=info msg="RemovePodSandbox \"7dc6b7f09cbe6e3ccff5917794c14a299c3a6a1abe53be8b1ce57c6e1a4beeec\" returns successfully" Sep 13 00:05:57.232493 env[1920]: time="2025-09-13T00:05:57.232428095Z" level=info msg="StopPodSandbox for \"30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4\"" Sep 13 00:05:57.411359 env[1920]: 2025-09-13 00:05:57.319 [WARNING][5535] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" WorkloadEndpoint="ip--172--31--29--1-k8s-whisker--79ccfb85c9--62fmz-eth0" Sep 13 00:05:57.411359 env[1920]: 2025-09-13 00:05:57.320 [INFO][5535] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" Sep 13 00:05:57.411359 env[1920]: 2025-09-13 00:05:57.320 [INFO][5535] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" iface="eth0" netns="" Sep 13 00:05:57.411359 env[1920]: 2025-09-13 00:05:57.320 [INFO][5535] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" Sep 13 00:05:57.411359 env[1920]: 2025-09-13 00:05:57.320 [INFO][5535] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" Sep 13 00:05:57.411359 env[1920]: 2025-09-13 00:05:57.388 [INFO][5542] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" HandleID="k8s-pod-network.30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" Workload="ip--172--31--29--1-k8s-whisker--79ccfb85c9--62fmz-eth0" Sep 13 00:05:57.411359 env[1920]: 2025-09-13 00:05:57.388 [INFO][5542] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:57.411359 env[1920]: 2025-09-13 00:05:57.388 [INFO][5542] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:57.411359 env[1920]: 2025-09-13 00:05:57.402 [WARNING][5542] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" HandleID="k8s-pod-network.30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" Workload="ip--172--31--29--1-k8s-whisker--79ccfb85c9--62fmz-eth0" Sep 13 00:05:57.411359 env[1920]: 2025-09-13 00:05:57.403 [INFO][5542] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" HandleID="k8s-pod-network.30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" Workload="ip--172--31--29--1-k8s-whisker--79ccfb85c9--62fmz-eth0" Sep 13 00:05:57.411359 env[1920]: 2025-09-13 00:05:57.405 [INFO][5542] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:57.411359 env[1920]: 2025-09-13 00:05:57.408 [INFO][5535] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" Sep 13 00:05:57.412450 env[1920]: time="2025-09-13T00:05:57.412394363Z" level=info msg="TearDown network for sandbox \"30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4\" successfully" Sep 13 00:05:57.412604 env[1920]: time="2025-09-13T00:05:57.412569843Z" level=info msg="StopPodSandbox for \"30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4\" returns successfully" Sep 13 00:05:57.415143 env[1920]: time="2025-09-13T00:05:57.413465618Z" level=info msg="RemovePodSandbox for \"30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4\"" Sep 13 00:05:57.415143 env[1920]: time="2025-09-13T00:05:57.413535021Z" level=info msg="Forcibly stopping sandbox \"30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4\"" Sep 13 00:05:57.598901 env[1920]: 2025-09-13 00:05:57.503 [WARNING][5556] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" 
WorkloadEndpoint="ip--172--31--29--1-k8s-whisker--79ccfb85c9--62fmz-eth0" Sep 13 00:05:57.598901 env[1920]: 2025-09-13 00:05:57.503 [INFO][5556] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" Sep 13 00:05:57.598901 env[1920]: 2025-09-13 00:05:57.503 [INFO][5556] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" iface="eth0" netns="" Sep 13 00:05:57.598901 env[1920]: 2025-09-13 00:05:57.503 [INFO][5556] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" Sep 13 00:05:57.598901 env[1920]: 2025-09-13 00:05:57.503 [INFO][5556] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" Sep 13 00:05:57.598901 env[1920]: 2025-09-13 00:05:57.563 [INFO][5563] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" HandleID="k8s-pod-network.30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" Workload="ip--172--31--29--1-k8s-whisker--79ccfb85c9--62fmz-eth0" Sep 13 00:05:57.598901 env[1920]: 2025-09-13 00:05:57.564 [INFO][5563] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:57.598901 env[1920]: 2025-09-13 00:05:57.564 [INFO][5563] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:57.598901 env[1920]: 2025-09-13 00:05:57.585 [WARNING][5563] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" HandleID="k8s-pod-network.30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" Workload="ip--172--31--29--1-k8s-whisker--79ccfb85c9--62fmz-eth0" Sep 13 00:05:57.598901 env[1920]: 2025-09-13 00:05:57.585 [INFO][5563] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" HandleID="k8s-pod-network.30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" Workload="ip--172--31--29--1-k8s-whisker--79ccfb85c9--62fmz-eth0" Sep 13 00:05:57.598901 env[1920]: 2025-09-13 00:05:57.588 [INFO][5563] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:57.598901 env[1920]: 2025-09-13 00:05:57.592 [INFO][5556] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4" Sep 13 00:05:57.599865 env[1920]: time="2025-09-13T00:05:57.599806649Z" level=info msg="TearDown network for sandbox \"30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4\" successfully" Sep 13 00:05:57.604519 env[1920]: time="2025-09-13T00:05:57.604459539Z" level=info msg="RemovePodSandbox \"30ddec15f59c7bc059ed76a50fff88586dde6b553bed7ebf22da982b8aa474b4\" returns successfully" Sep 13 00:05:57.605941 env[1920]: time="2025-09-13T00:05:57.605884119Z" level=info msg="StopPodSandbox for \"9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323\"" Sep 13 00:05:57.780541 env[1920]: 2025-09-13 00:05:57.692 [WARNING][5578] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--1-k8s-coredns--7c65d6cfc9--frxhj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"e3f25ad3-33b2-4d46-a48c-0b632c9b8329", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 4, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-1", ContainerID:"54232dc65e0fbb11a984179c1a2c1ae8fe68c3ee2560e395b3760a4a07d24f27", Pod:"coredns-7c65d6cfc9-frxhj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35ddc4d6510", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:57.780541 env[1920]: 2025-09-13 00:05:57.693 [INFO][5578] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" Sep 13 00:05:57.780541 env[1920]: 2025-09-13 00:05:57.693 [INFO][5578] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" iface="eth0" netns="" Sep 13 00:05:57.780541 env[1920]: 2025-09-13 00:05:57.693 [INFO][5578] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" Sep 13 00:05:57.780541 env[1920]: 2025-09-13 00:05:57.693 [INFO][5578] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" Sep 13 00:05:57.780541 env[1920]: 2025-09-13 00:05:57.747 [INFO][5585] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" HandleID="k8s-pod-network.9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" Workload="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--frxhj-eth0" Sep 13 00:05:57.780541 env[1920]: 2025-09-13 00:05:57.747 [INFO][5585] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:57.780541 env[1920]: 2025-09-13 00:05:57.748 [INFO][5585] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:57.780541 env[1920]: 2025-09-13 00:05:57.763 [WARNING][5585] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" HandleID="k8s-pod-network.9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" Workload="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--frxhj-eth0" Sep 13 00:05:57.780541 env[1920]: 2025-09-13 00:05:57.764 [INFO][5585] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" HandleID="k8s-pod-network.9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" Workload="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--frxhj-eth0" Sep 13 00:05:57.780541 env[1920]: 2025-09-13 00:05:57.768 [INFO][5585] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:57.780541 env[1920]: 2025-09-13 00:05:57.771 [INFO][5578] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" Sep 13 00:05:57.780541 env[1920]: time="2025-09-13T00:05:57.779493111Z" level=info msg="TearDown network for sandbox \"9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323\" successfully" Sep 13 00:05:57.780541 env[1920]: time="2025-09-13T00:05:57.779542388Z" level=info msg="StopPodSandbox for \"9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323\" returns successfully" Sep 13 00:05:57.782086 env[1920]: time="2025-09-13T00:05:57.782019801Z" level=info msg="RemovePodSandbox for \"9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323\"" Sep 13 00:05:57.782449 env[1920]: time="2025-09-13T00:05:57.782364437Z" level=info msg="Forcibly stopping sandbox \"9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323\"" Sep 13 00:05:58.004585 env[1920]: 2025-09-13 00:05:57.911 [WARNING][5600] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--1-k8s-coredns--7c65d6cfc9--frxhj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"e3f25ad3-33b2-4d46-a48c-0b632c9b8329", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 4, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-1", ContainerID:"54232dc65e0fbb11a984179c1a2c1ae8fe68c3ee2560e395b3760a4a07d24f27", Pod:"coredns-7c65d6cfc9-frxhj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35ddc4d6510", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:05:58.004585 env[1920]: 2025-09-13 00:05:57.912 [INFO][5600] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" Sep 13 00:05:58.004585 env[1920]: 2025-09-13 00:05:57.912 [INFO][5600] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" iface="eth0" netns="" Sep 13 00:05:58.004585 env[1920]: 2025-09-13 00:05:57.912 [INFO][5600] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" Sep 13 00:05:58.004585 env[1920]: 2025-09-13 00:05:57.912 [INFO][5600] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" Sep 13 00:05:58.004585 env[1920]: 2025-09-13 00:05:57.982 [INFO][5607] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" HandleID="k8s-pod-network.9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" Workload="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--frxhj-eth0" Sep 13 00:05:58.004585 env[1920]: 2025-09-13 00:05:57.983 [INFO][5607] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:05:58.004585 env[1920]: 2025-09-13 00:05:57.983 [INFO][5607] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:05:58.004585 env[1920]: 2025-09-13 00:05:57.996 [WARNING][5607] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" HandleID="k8s-pod-network.9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" Workload="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--frxhj-eth0" Sep 13 00:05:58.004585 env[1920]: 2025-09-13 00:05:57.996 [INFO][5607] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" HandleID="k8s-pod-network.9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" Workload="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--frxhj-eth0" Sep 13 00:05:58.004585 env[1920]: 2025-09-13 00:05:57.998 [INFO][5607] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:05:58.004585 env[1920]: 2025-09-13 00:05:58.001 [INFO][5600] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323" Sep 13 00:05:58.005911 env[1920]: time="2025-09-13T00:05:58.005855287Z" level=info msg="TearDown network for sandbox \"9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323\" successfully" Sep 13 00:05:58.011226 env[1920]: time="2025-09-13T00:05:58.011160341Z" level=info msg="RemovePodSandbox \"9977923f64dd87ebe42795d603d7eb16b5f0a811bff5154c0ef6892f72a18323\" returns successfully" Sep 13 00:05:59.039729 env[1920]: time="2025-09-13T00:05:59.039656139Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:59.042799 env[1920]: time="2025-09-13T00:05:59.042708211Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:59.047126 env[1920]: time="2025-09-13T00:05:59.045749313Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:59.049285 env[1920]: time="2025-09-13T00:05:59.049218998Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:59.050931 env[1920]: time="2025-09-13T00:05:59.050858389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 13 00:05:59.055222 env[1920]: time="2025-09-13T00:05:59.055156962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:05:59.061747 env[1920]: time="2025-09-13T00:05:59.061654608Z" level=info msg="CreateContainer within sandbox \"894b43e3dc548a07ab30eeca8bab3d67cdf388423ef8ef0ea5c3b88f1acd13b6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:05:59.103271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount106696684.mount: Deactivated successfully. Sep 13 00:05:59.106863 env[1920]: time="2025-09-13T00:05:59.106752787Z" level=info msg="CreateContainer within sandbox \"894b43e3dc548a07ab30eeca8bab3d67cdf388423ef8ef0ea5c3b88f1acd13b6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"647f67272b316bc3403282542889022d55a677227c713f9ac6be7299729bbd53\"" Sep 13 00:05:59.110976 env[1920]: time="2025-09-13T00:05:59.110812804Z" level=info msg="StartContainer for \"647f67272b316bc3403282542889022d55a677227c713f9ac6be7299729bbd53\"" Sep 13 00:05:59.177293 systemd[1]: run-containerd-runc-k8s.io-647f67272b316bc3403282542889022d55a677227c713f9ac6be7299729bbd53-runc.ZMd7K2.mount: Deactivated successfully. 
Sep 13 00:05:59.300243 env[1920]: time="2025-09-13T00:05:59.299429038Z" level=info msg="StartContainer for \"647f67272b316bc3403282542889022d55a677227c713f9ac6be7299729bbd53\" returns successfully" Sep 13 00:05:59.401168 env[1920]: time="2025-09-13T00:05:59.396656477Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:59.409817 env[1920]: time="2025-09-13T00:05:59.409084071Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:59.417328 env[1920]: time="2025-09-13T00:05:59.417248519Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:59.423636 env[1920]: time="2025-09-13T00:05:59.423541643Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:05:59.425962 env[1920]: time="2025-09-13T00:05:59.425191871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 13 00:05:59.430250 env[1920]: time="2025-09-13T00:05:59.429925074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:05:59.432992 env[1920]: time="2025-09-13T00:05:59.431627232Z" level=info msg="CreateContainer within sandbox \"c6597bd5b30b19a93626ba95673fd81ae6461643bedc3c601682b99b37ebff2a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:05:59.439000 audit[5654]: NETFILTER_CFG table=filter:116 family=2 
entries=14 op=nft_register_rule pid=5654 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:05:59.439000 audit[5654]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffdfa7e4a0 a2=0 a3=1 items=0 ppid=3158 pid=5654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:59.439000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:05:59.452000 audit[5654]: NETFILTER_CFG table=nat:117 family=2 entries=20 op=nft_register_rule pid=5654 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:05:59.452000 audit[5654]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffdfa7e4a0 a2=0 a3=1 items=0 ppid=3158 pid=5654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:05:59.452000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:05:59.460877 env[1920]: time="2025-09-13T00:05:59.460323212Z" level=info msg="CreateContainer within sandbox \"c6597bd5b30b19a93626ba95673fd81ae6461643bedc3c601682b99b37ebff2a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d60730cc52fa124d653df58bf52732c5587771da0143e66be6834661d3124809\"" Sep 13 00:05:59.471253 env[1920]: time="2025-09-13T00:05:59.471197414Z" level=info msg="StartContainer for \"d60730cc52fa124d653df58bf52732c5587771da0143e66be6834661d3124809\"" Sep 13 00:05:59.651295 env[1920]: time="2025-09-13T00:05:59.651125210Z" level=info msg="StartContainer for \"d60730cc52fa124d653df58bf52732c5587771da0143e66be6834661d3124809\" returns successfully" Sep 
13 00:05:59.874379 systemd[1]: Started sshd@8-172.31.29.1:22-139.178.89.65:35472.service. Sep 13 00:05:59.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.29.1:22-139.178.89.65:35472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:05:59.878170 kernel: kauditd_printk_skb: 17 callbacks suppressed Sep 13 00:05:59.878329 kernel: audit: type=1130 audit(1757721959.873:426): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.29.1:22-139.178.89.65:35472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:06:00.079000 audit[5699]: USER_ACCT pid=5699 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:00.081282 sshd[5699]: Accepted publickey for core from 139.178.89.65 port 35472 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:06:00.095349 sshd[5699]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:06:00.092000 audit[5699]: CRED_ACQ pid=5699 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:00.112887 kernel: audit: type=1101 audit(1757721960.079:427): pid=5699 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:00.113011 kernel: audit: type=1103 audit(1757721960.092:428): pid=5699 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:00.123463 kernel: audit: type=1006 audit(1757721960.092:429): pid=5699 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Sep 13 00:06:00.124347 systemd-logind[1911]: New session 9 of user core. Sep 13 00:06:00.127565 systemd[1]: Started session-9.scope. Sep 13 00:06:00.092000 audit[5699]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdab1c1e0 a2=3 a3=1 items=0 ppid=1 pid=5699 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:00.140211 kernel: audit: type=1300 audit(1757721960.092:429): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdab1c1e0 a2=3 a3=1 items=0 ppid=1 pid=5699 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:00.092000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:06:00.163490 kernel: audit: type=1327 audit(1757721960.092:429): proctitle=737368643A20636F7265205B707269765D Sep 13 00:06:00.163652 kernel: audit: type=1105 audit(1757721960.154:430): pid=5699 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:00.154000 audit[5699]: USER_START pid=5699 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:00.157000 audit[5702]: CRED_ACQ pid=5702 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:00.184929 kernel: audit: type=1103 audit(1757721960.157:431): pid=5702 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:00.380398 kubelet[3053]: I0913 00:06:00.380162 3053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8978b56b9-tpkzf" podStartSLOduration=41.679192332 podStartE2EDuration="47.380136491s" podCreationTimestamp="2025-09-13 00:05:13 +0000 UTC" firstStartedPulling="2025-09-13 00:05:53.353685513 +0000 UTC m=+59.070985682" lastFinishedPulling="2025-09-13 00:05:59.054629684 +0000 UTC m=+64.771929841" observedRunningTime="2025-09-13 00:05:59.402445232 +0000 UTC m=+65.119745413" watchObservedRunningTime="2025-09-13 00:06:00.380136491 +0000 UTC m=+66.097436648" Sep 13 00:06:00.493000 audit[5712]: NETFILTER_CFG table=filter:118 family=2 entries=14 op=nft_register_rule pid=5712 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:06:00.503824 kernel: audit: type=1325 audit(1757721960.493:432): table=filter:118 family=2 entries=14 op=nft_register_rule pid=5712 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:06:00.493000 audit[5712]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffdc331250 a2=0 a3=1 items=0 ppid=3158 pid=5712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:00.493000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:06:00.526807 kernel: audit: type=1300 audit(1757721960.493:432): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffdc331250 a2=0 a3=1 items=0 ppid=3158 pid=5712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:00.533000 audit[5712]: NETFILTER_CFG table=nat:119 family=2 entries=20 op=nft_register_rule pid=5712 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:06:00.533000 audit[5712]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffdc331250 a2=0 a3=1 items=0 ppid=3158 pid=5712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:00.533000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:06:00.648178 sshd[5699]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:00.648000 audit[5699]: USER_END pid=5699 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:00.648000 audit[5699]: CRED_DISP pid=5699 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 
addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:00.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.29.1:22-139.178.89.65:35472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:06:00.654456 systemd-logind[1911]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:06:00.654915 systemd[1]: sshd@8-172.31.29.1:22-139.178.89.65:35472.service: Deactivated successfully. Sep 13 00:06:00.657381 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:06:00.659700 systemd-logind[1911]: Removed session 9. Sep 13 00:06:01.238984 env[1920]: time="2025-09-13T00:06:01.238901856Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:01.245159 env[1920]: time="2025-09-13T00:06:01.245084590Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:01.256271 env[1920]: time="2025-09-13T00:06:01.256190398Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:01.261176 env[1920]: time="2025-09-13T00:06:01.261093298Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:01.263896 env[1920]: time="2025-09-13T00:06:01.262615049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 13 00:06:01.271932 
env[1920]: time="2025-09-13T00:06:01.271041802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:06:01.272914 env[1920]: time="2025-09-13T00:06:01.272826339Z" level=info msg="CreateContainer within sandbox \"20f725260c00a448e75c55331e52e41c73477b7f310b154ae308b79c186f2e4a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:06:01.333267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4060304386.mount: Deactivated successfully. Sep 13 00:06:01.337377 env[1920]: time="2025-09-13T00:06:01.337289751Z" level=info msg="CreateContainer within sandbox \"20f725260c00a448e75c55331e52e41c73477b7f310b154ae308b79c186f2e4a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"eff2f61723e3a2ed1b70f282b6e9a59e8dc21e1cacb3004cb03fe3c0eefa4f6c\"" Sep 13 00:06:01.339859 env[1920]: time="2025-09-13T00:06:01.338550936Z" level=info msg="StartContainer for \"eff2f61723e3a2ed1b70f282b6e9a59e8dc21e1cacb3004cb03fe3c0eefa4f6c\"" Sep 13 00:06:01.379884 kubelet[3053]: I0913 00:06:01.377650 3053 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:06:01.537509 env[1920]: time="2025-09-13T00:06:01.537432733Z" level=info msg="StartContainer for \"eff2f61723e3a2ed1b70f282b6e9a59e8dc21e1cacb3004cb03fe3c0eefa4f6c\" returns successfully" Sep 13 00:06:03.627782 kubelet[3053]: I0913 00:06:03.627625 3053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8978b56b9-mstsf" podStartSLOduration=45.35716968 podStartE2EDuration="50.62757681s" podCreationTimestamp="2025-09-13 00:05:13 +0000 UTC" firstStartedPulling="2025-09-13 00:05:54.157608661 +0000 UTC m=+59.874908830" lastFinishedPulling="2025-09-13 00:05:59.428015791 +0000 UTC m=+65.145315960" observedRunningTime="2025-09-13 00:06:00.382673856 +0000 UTC m=+66.099974121" watchObservedRunningTime="2025-09-13 00:06:03.62757681 +0000 UTC m=+69.344876991" Sep 13 00:06:03.721000 
audit[5754]: NETFILTER_CFG table=filter:120 family=2 entries=13 op=nft_register_rule pid=5754 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:06:03.721000 audit[5754]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffe38c26b0 a2=0 a3=1 items=0 ppid=3158 pid=5754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:03.721000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:06:03.740000 audit[5754]: NETFILTER_CFG table=nat:121 family=2 entries=27 op=nft_register_chain pid=5754 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:06:03.740000 audit[5754]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=9348 a0=3 a1=ffffe38c26b0 a2=0 a3=1 items=0 ppid=3158 pid=5754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:03.740000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:06:04.155214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1503100016.mount: Deactivated successfully. 
Sep 13 00:06:04.176359 env[1920]: time="2025-09-13T00:06:04.176286023Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:04.181378 env[1920]: time="2025-09-13T00:06:04.181293121Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:04.187920 env[1920]: time="2025-09-13T00:06:04.187845073Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:04.192965 env[1920]: time="2025-09-13T00:06:04.192889553Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:04.195878 env[1920]: time="2025-09-13T00:06:04.194503066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 13 00:06:04.198557 env[1920]: time="2025-09-13T00:06:04.198476343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:06:04.201748 env[1920]: time="2025-09-13T00:06:04.201671049Z" level=info msg="CreateContainer within sandbox \"ff48d1fadd4a3e58e0bdebb1872e609b966d2148fb4b04c0efc894f4d0161838\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:06:04.243654 env[1920]: time="2025-09-13T00:06:04.243568171Z" level=info msg="CreateContainer within sandbox \"ff48d1fadd4a3e58e0bdebb1872e609b966d2148fb4b04c0efc894f4d0161838\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} 
returns container id \"e05b7e9249cf21359b285844aa50bccbd485a008162c30ea96c9e2367b3a5c52\"" Sep 13 00:06:04.245168 env[1920]: time="2025-09-13T00:06:04.245105021Z" level=info msg="StartContainer for \"e05b7e9249cf21359b285844aa50bccbd485a008162c30ea96c9e2367b3a5c52\"" Sep 13 00:06:04.262885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1326805291.mount: Deactivated successfully. Sep 13 00:06:04.519179 env[1920]: time="2025-09-13T00:06:04.519002564Z" level=info msg="StartContainer for \"e05b7e9249cf21359b285844aa50bccbd485a008162c30ea96c9e2367b3a5c52\" returns successfully" Sep 13 00:06:05.151351 systemd[1]: run-containerd-runc-k8s.io-e05b7e9249cf21359b285844aa50bccbd485a008162c30ea96c9e2367b3a5c52-runc.8r7u1W.mount: Deactivated successfully. Sep 13 00:06:05.485000 audit[5794]: NETFILTER_CFG table=filter:122 family=2 entries=11 op=nft_register_rule pid=5794 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:06:05.488875 kernel: kauditd_printk_skb: 13 callbacks suppressed Sep 13 00:06:05.489096 kernel: audit: type=1325 audit(1757721965.485:439): table=filter:122 family=2 entries=11 op=nft_register_rule pid=5794 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:06:05.485000 audit[5794]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=fffffec52e80 a2=0 a3=1 items=0 ppid=3158 pid=5794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:05.507078 kernel: audit: type=1300 audit(1757721965.485:439): arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=fffffec52e80 a2=0 a3=1 items=0 ppid=3158 pid=5794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:05.485000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:06:05.520280 kernel: audit: type=1327 audit(1757721965.485:439): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:06:05.512000 audit[5794]: NETFILTER_CFG table=nat:123 family=2 entries=29 op=nft_register_chain pid=5794 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:06:05.527376 kernel: audit: type=1325 audit(1757721965.512:440): table=nat:123 family=2 entries=29 op=nft_register_chain pid=5794 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:06:05.512000 audit[5794]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10116 a0=3 a1=fffffec52e80 a2=0 a3=1 items=0 ppid=3158 pid=5794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:05.542892 kernel: audit: type=1300 audit(1757721965.512:440): arch=c00000b7 syscall=211 success=yes exit=10116 a0=3 a1=fffffec52e80 a2=0 a3=1 items=0 ppid=3158 pid=5794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:05.512000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:06:05.550114 kernel: audit: type=1327 audit(1757721965.512:440): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:06:05.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.29.1:22-139.178.89.65:40654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 13 00:06:05.676178 systemd[1]: Started sshd@9-172.31.29.1:22-139.178.89.65:40654.service. Sep 13 00:06:05.688415 kernel: audit: type=1130 audit(1757721965.675:441): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.29.1:22-139.178.89.65:40654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:06:05.941000 audit[5795]: USER_ACCT pid=5795 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:05.956723 sshd[5795]: Accepted publickey for core from 139.178.89.65 port 40654 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:06:05.970355 kernel: audit: type=1101 audit(1757721965.941:442): pid=5795 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:05.970538 kernel: audit: type=1103 audit(1757721965.957:443): pid=5795 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:05.957000 audit[5795]: CRED_ACQ pid=5795 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:05.971247 sshd[5795]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:06:05.982952 kernel: audit: type=1006 audit(1757721965.957:444): pid=5795 
uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Sep 13 00:06:05.957000 audit[5795]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff2c9eba0 a2=3 a3=1 items=0 ppid=1 pid=5795 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:05.957000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:06:05.986594 systemd[1]: Started session-10.scope. Sep 13 00:06:05.989945 systemd-logind[1911]: New session 10 of user core. Sep 13 00:06:06.003000 audit[5795]: USER_START pid=5795 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:06.008000 audit[5798]: CRED_ACQ pid=5798 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:06.334944 sshd[5795]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:06.336000 audit[5795]: USER_END pid=5795 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:06.336000 audit[5795]: CRED_DISP pid=5795 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:06.341539 systemd[1]: 
sshd@9-172.31.29.1:22-139.178.89.65:40654.service: Deactivated successfully. Sep 13 00:06:06.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.29.1:22-139.178.89.65:40654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:06:06.343749 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:06:06.350084 systemd-logind[1911]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:06:06.355017 systemd-logind[1911]: Removed session 10. Sep 13 00:06:06.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.29.1:22-139.178.89.65:40660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:06:06.362699 systemd[1]: Started sshd@10-172.31.29.1:22-139.178.89.65:40660.service. Sep 13 00:06:06.542000 audit[5809]: USER_ACCT pid=5809 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:06.543215 sshd[5809]: Accepted publickey for core from 139.178.89.65 port 40660 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:06:06.544000 audit[5809]: CRED_ACQ pid=5809 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:06.545000 audit[5809]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff0f38ff0 a2=3 a3=1 items=0 ppid=1 pid=5809 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 
13 00:06:06.545000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:06:06.547135 sshd[5809]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:06:06.558080 systemd[1]: Started session-11.scope. Sep 13 00:06:06.559538 systemd-logind[1911]: New session 11 of user core. Sep 13 00:06:06.584000 audit[5809]: USER_START pid=5809 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:06.588000 audit[5814]: CRED_ACQ pid=5814 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:06.970148 sshd[5809]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:06.977000 audit[5809]: USER_END pid=5809 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:06.978000 audit[5809]: CRED_DISP pid=5809 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:06.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.29.1:22-139.178.89.65:40660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:06:06.981948 systemd[1]: sshd@10-172.31.29.1:22-139.178.89.65:40660.service: Deactivated successfully. Sep 13 00:06:06.983630 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:06:07.014212 systemd[1]: Started sshd@11-172.31.29.1:22-139.178.89.65:40672.service. Sep 13 00:06:07.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.29.1:22-139.178.89.65:40672 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:06:07.017478 systemd-logind[1911]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:06:07.038371 systemd-logind[1911]: Removed session 11. Sep 13 00:06:07.073913 systemd[1]: run-containerd-runc-k8s.io-dad4fff024c4f5fee0310844c06f44418179385c69ed85739d4c8dcc2f307956-runc.PXAssj.mount: Deactivated successfully. Sep 13 00:06:07.251000 audit[5828]: USER_ACCT pid=5828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:07.253106 sshd[5828]: Accepted publickey for core from 139.178.89.65 port 40672 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:06:07.254000 audit[5828]: CRED_ACQ pid=5828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:07.255000 audit[5828]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc9629620 a2=3 a3=1 items=0 ppid=1 pid=5828 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:07.255000 audit: PROCTITLE 
proctitle=737368643A20636F7265205B707269765D Sep 13 00:06:07.257977 sshd[5828]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:06:07.269113 systemd-logind[1911]: New session 12 of user core. Sep 13 00:06:07.270089 systemd[1]: Started session-12.scope. Sep 13 00:06:07.295000 audit[5828]: USER_START pid=5828 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:07.298000 audit[5846]: CRED_ACQ pid=5846 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:07.334033 kubelet[3053]: I0913 00:06:07.333720 3053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-656c987b58-8tccj" podStartSLOduration=6.889064026 podStartE2EDuration="19.333689412s" podCreationTimestamp="2025-09-13 00:05:48 +0000 UTC" firstStartedPulling="2025-09-13 00:05:51.752674914 +0000 UTC m=+57.469975096" lastFinishedPulling="2025-09-13 00:06:04.197300265 +0000 UTC m=+69.914600482" observedRunningTime="2025-09-13 00:06:05.426578687 +0000 UTC m=+71.143878928" watchObservedRunningTime="2025-09-13 00:06:07.333689412 +0000 UTC m=+73.050989581" Sep 13 00:06:07.563485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2946807303.mount: Deactivated successfully. 
Sep 13 00:06:07.608737 sshd[5828]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:07.610000 audit[5828]: USER_END pid=5828 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:07.610000 audit[5828]: CRED_DISP pid=5828 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:07.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.29.1:22-139.178.89.65:40672 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:06:07.613727 systemd[1]: sshd@11-172.31.29.1:22-139.178.89.65:40672.service: Deactivated successfully. Sep 13 00:06:07.615838 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:06:07.615896 systemd-logind[1911]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:06:07.619094 systemd-logind[1911]: Removed session 12. 
Sep 13 00:06:08.677866 env[1920]: time="2025-09-13T00:06:08.677802196Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:08.683867 env[1920]: time="2025-09-13T00:06:08.683811578Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:08.687293 env[1920]: time="2025-09-13T00:06:08.687244093Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:08.690698 env[1920]: time="2025-09-13T00:06:08.690645753Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:08.691998 env[1920]: time="2025-09-13T00:06:08.691897776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 13 00:06:08.697900 env[1920]: time="2025-09-13T00:06:08.697352960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:06:08.699309 env[1920]: time="2025-09-13T00:06:08.699239431Z" level=info msg="CreateContainer within sandbox \"5a3336a46a17f3b65ca66b0f81401f8f5c199a9311e56535c600ff6a80093d45\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:06:08.739130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1413991124.mount: Deactivated successfully. 
Sep 13 00:06:08.747558 env[1920]: time="2025-09-13T00:06:08.747491499Z" level=info msg="CreateContainer within sandbox \"5a3336a46a17f3b65ca66b0f81401f8f5c199a9311e56535c600ff6a80093d45\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2be5625fcaee790eda909f45b0c4efacdce04c5c77d971509d82c1cfa8ae118a\"" Sep 13 00:06:08.751810 env[1920]: time="2025-09-13T00:06:08.751663446Z" level=info msg="StartContainer for \"2be5625fcaee790eda909f45b0c4efacdce04c5c77d971509d82c1cfa8ae118a\"" Sep 13 00:06:09.007269 env[1920]: time="2025-09-13T00:06:09.007126260Z" level=info msg="StartContainer for \"2be5625fcaee790eda909f45b0c4efacdce04c5c77d971509d82c1cfa8ae118a\" returns successfully" Sep 13 00:06:09.525000 audit[5902]: NETFILTER_CFG table=filter:124 family=2 entries=10 op=nft_register_rule pid=5902 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:06:09.525000 audit[5902]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffc5031490 a2=0 a3=1 items=0 ppid=3158 pid=5902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:09.525000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:06:09.532000 audit[5902]: NETFILTER_CFG table=nat:125 family=2 entries=24 op=nft_register_rule pid=5902 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:06:09.532000 audit[5902]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7308 a0=3 a1=ffffc5031490 a2=0 a3=1 items=0 ppid=3158 pid=5902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:09.532000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:06:10.388275 systemd[1]: run-containerd-runc-k8s.io-2be5625fcaee790eda909f45b0c4efacdce04c5c77d971509d82c1cfa8ae118a-runc.SGGSer.mount: Deactivated successfully. Sep 13 00:06:10.533057 systemd[1]: run-containerd-runc-k8s.io-2be5625fcaee790eda909f45b0c4efacdce04c5c77d971509d82c1cfa8ae118a-runc.08f7ZE.mount: Deactivated successfully. Sep 13 00:06:11.510945 systemd[1]: run-containerd-runc-k8s.io-2be5625fcaee790eda909f45b0c4efacdce04c5c77d971509d82c1cfa8ae118a-runc.VTWplk.mount: Deactivated successfully. Sep 13 00:06:12.044638 env[1920]: time="2025-09-13T00:06:12.044547303Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:12.049335 env[1920]: time="2025-09-13T00:06:12.049254480Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:12.053507 env[1920]: time="2025-09-13T00:06:12.053432915Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:12.057559 env[1920]: time="2025-09-13T00:06:12.057488535Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:06:12.058760 env[1920]: time="2025-09-13T00:06:12.058710308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 13 
00:06:12.064400 env[1920]: time="2025-09-13T00:06:12.064325234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:06:12.102125 env[1920]: time="2025-09-13T00:06:12.101881000Z" level=info msg="CreateContainer within sandbox \"597b77902a13bf16c6747710c077f79062a77c0536217fbf7e305e81f56c3468\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:06:12.142285 env[1920]: time="2025-09-13T00:06:12.142198753Z" level=info msg="CreateContainer within sandbox \"597b77902a13bf16c6747710c077f79062a77c0536217fbf7e305e81f56c3468\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6806233a4bf6cf4664d483b78f05e38ca0df1f89677b8c29f388fc3191b96fea\"" Sep 13 00:06:12.142964 env[1920]: time="2025-09-13T00:06:12.142915642Z" level=info msg="StartContainer for \"6806233a4bf6cf4664d483b78f05e38ca0df1f89677b8c29f388fc3191b96fea\"" Sep 13 00:06:12.290276 env[1920]: time="2025-09-13T00:06:12.290190279Z" level=info msg="StartContainer for \"6806233a4bf6cf4664d483b78f05e38ca0df1f89677b8c29f388fc3191b96fea\" returns successfully" Sep 13 00:06:12.486135 kubelet[3053]: I0913 00:06:12.485536 3053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-6v2qn" podStartSLOduration=36.706254572 podStartE2EDuration="49.48551526s" podCreationTimestamp="2025-09-13 00:05:23 +0000 UTC" firstStartedPulling="2025-09-13 00:05:55.915559664 +0000 UTC m=+61.632859833" lastFinishedPulling="2025-09-13 00:06:08.694820364 +0000 UTC m=+74.412120521" observedRunningTime="2025-09-13 00:06:09.465585012 +0000 UTC m=+75.182885181" watchObservedRunningTime="2025-09-13 00:06:12.48551526 +0000 UTC m=+78.202815417" Sep 13 00:06:12.498294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1309787995.mount: Deactivated successfully. 
Sep 13 00:06:12.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.29.1:22-139.178.89.65:39326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:06:12.638585 systemd[1]: Started sshd@12-172.31.29.1:22-139.178.89.65:39326.service. Sep 13 00:06:12.641476 kernel: kauditd_printk_skb: 35 callbacks suppressed Sep 13 00:06:12.641635 kernel: audit: type=1130 audit(1757721972.637:470): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.29.1:22-139.178.89.65:39326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:06:12.659219 kubelet[3053]: I0913 00:06:12.659131 3053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6b97f6f77f-zv7t8" podStartSLOduration=33.133789457 podStartE2EDuration="48.659107135s" podCreationTimestamp="2025-09-13 00:05:24 +0000 UTC" firstStartedPulling="2025-09-13 00:05:56.535676182 +0000 UTC m=+62.252976351" lastFinishedPulling="2025-09-13 00:06:12.06099386 +0000 UTC m=+77.778294029" observedRunningTime="2025-09-13 00:06:12.501830756 +0000 UTC m=+78.219130937" watchObservedRunningTime="2025-09-13 00:06:12.659107135 +0000 UTC m=+78.376407304" Sep 13 00:06:12.832000 audit[6032]: USER_ACCT pid=6032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:12.835185 sshd[6032]: Accepted publickey for core from 139.178.89.65 port 39326 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:06:12.840461 sshd[6032]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:06:12.832000 audit[6032]: CRED_ACQ pid=6032 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:12.853309 kernel: audit: type=1101 audit(1757721972.832:471): pid=6032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:12.853511 kernel: audit: type=1103 audit(1757721972.832:472): pid=6032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:12.860159 kernel: audit: type=1006 audit(1757721972.832:473): pid=6032 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Sep 13 00:06:12.832000 audit[6032]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff068ed20 a2=3 a3=1 items=0 ppid=1 pid=6032 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:12.870357 kernel: audit: type=1300 audit(1757721972.832:473): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff068ed20 a2=3 a3=1 items=0 ppid=1 pid=6032 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:12.832000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:06:12.874989 kernel: audit: type=1327 audit(1757721972.832:473): proctitle=737368643A20636F7265205B707269765D Sep 13 00:06:12.876964 systemd-logind[1911]: New session 13 of user core. 
Sep 13 00:06:12.885979 systemd[1]: Started session-13.scope.
Sep 13 00:06:12.898000 audit[6032]: USER_START pid=6032 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:12.913176 kernel: audit: type=1105 audit(1757721972.898:474): pid=6032 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:12.913310 kernel: audit: type=1103 audit(1757721972.910:475): pid=6035 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:12.910000 audit[6035]: CRED_ACQ pid=6035 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:13.190305 sshd[6032]: pam_unix(sshd:session): session closed for user core
Sep 13 00:06:13.192000 audit[6032]: USER_END pid=6032 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:13.198729 systemd[1]: sshd@12-172.31.29.1:22-139.178.89.65:39326.service: Deactivated successfully.
Sep 13 00:06:13.200236 systemd[1]: session-13.scope: Deactivated successfully.
Sep 13 00:06:13.194000 audit[6032]: CRED_DISP pid=6032 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:13.207191 systemd-logind[1911]: Session 13 logged out. Waiting for processes to exit.
Sep 13 00:06:13.209305 systemd-logind[1911]: Removed session 13.
Sep 13 00:06:13.214742 kernel: audit: type=1106 audit(1757721973.192:476): pid=6032 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:13.214922 kernel: audit: type=1104 audit(1757721973.194:477): pid=6032 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:13.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.29.1:22-139.178.89.65:39326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:13.985914 env[1920]: time="2025-09-13T00:06:13.985858700Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:06:13.990576 env[1920]: time="2025-09-13T00:06:13.990523183Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:06:13.993382 env[1920]: time="2025-09-13T00:06:13.993329040Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:06:13.997394 env[1920]: time="2025-09-13T00:06:13.997340958Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:06:13.998119 env[1920]: time="2025-09-13T00:06:13.998074611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\""
Sep 13 00:06:14.005287 env[1920]: time="2025-09-13T00:06:14.005162170Z" level=info msg="CreateContainer within sandbox \"20f725260c00a448e75c55331e52e41c73477b7f310b154ae308b79c186f2e4a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 13 00:06:14.041557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2021193705.mount: Deactivated successfully.
Sep 13 00:06:14.044238 env[1920]: time="2025-09-13T00:06:14.044152056Z" level=info msg="CreateContainer within sandbox \"20f725260c00a448e75c55331e52e41c73477b7f310b154ae308b79c186f2e4a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"acf1d831a5083e5cc9428e09a3e7e9833caadaf7752eeeb106b78a0e269a4b67\""
Sep 13 00:06:14.045567 env[1920]: time="2025-09-13T00:06:14.045500072Z" level=info msg="StartContainer for \"acf1d831a5083e5cc9428e09a3e7e9833caadaf7752eeeb106b78a0e269a4b67\""
Sep 13 00:06:14.145125 systemd[1]: run-containerd-runc-k8s.io-acf1d831a5083e5cc9428e09a3e7e9833caadaf7752eeeb106b78a0e269a4b67-runc.lYdO1Z.mount: Deactivated successfully.
Sep 13 00:06:14.299389 env[1920]: time="2025-09-13T00:06:14.299324061Z" level=info msg="StartContainer for \"acf1d831a5083e5cc9428e09a3e7e9833caadaf7752eeeb106b78a0e269a4b67\" returns successfully"
Sep 13 00:06:14.851294 kubelet[3053]: I0913 00:06:14.851240 3053 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 13 00:06:14.852385 kubelet[3053]: I0913 00:06:14.852330 3053 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 13 00:06:18.219083 systemd[1]: Started sshd@13-172.31.29.1:22-139.178.89.65:39332.service.
Sep 13 00:06:18.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.29.1:22-139.178.89.65:39332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:18.222752 kernel: kauditd_printk_skb: 1 callbacks suppressed
Sep 13 00:06:18.222937 kernel: audit: type=1130 audit(1757721978.219:479): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.29.1:22-139.178.89.65:39332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:18.402000 audit[6094]: USER_ACCT pid=6094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:18.412888 sshd[6094]: Accepted publickey for core from 139.178.89.65 port 39332 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM
Sep 13 00:06:18.413901 kernel: audit: type=1101 audit(1757721978.402:480): pid=6094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:18.415000 audit[6094]: CRED_ACQ pid=6094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:18.427154 sshd[6094]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:06:18.427914 kernel: audit: type=1103 audit(1757721978.415:481): pid=6094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:18.428026 kernel: audit: type=1006 audit(1757721978.415:482): pid=6094 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1
Sep 13 00:06:18.415000 audit[6094]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd72a24b0 a2=3 a3=1 items=0 ppid=1 pid=6094 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:06:18.443879 kernel: audit: type=1300 audit(1757721978.415:482): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd72a24b0 a2=3 a3=1 items=0 ppid=1 pid=6094 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:06:18.444123 kernel: audit: type=1327 audit(1757721978.415:482): proctitle=737368643A20636F7265205B707269765D
Sep 13 00:06:18.415000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 00:06:18.456439 systemd-logind[1911]: New session 14 of user core.
Sep 13 00:06:18.458433 systemd[1]: Started session-14.scope.
Sep 13 00:06:18.492000 audit[6094]: USER_START pid=6094 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:18.506000 audit[6097]: CRED_ACQ pid=6097 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:18.509248 kernel: audit: type=1105 audit(1757721978.492:483): pid=6094 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:18.509410 kernel: audit: type=1103 audit(1757721978.506:484): pid=6097 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:18.841605 sshd[6094]: pam_unix(sshd:session): session closed for user core
Sep 13 00:06:18.843000 audit[6094]: USER_END pid=6094 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:18.845000 audit[6094]: CRED_DISP pid=6094 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:18.857077 systemd[1]: sshd@13-172.31.29.1:22-139.178.89.65:39332.service: Deactivated successfully.
Sep 13 00:06:18.865842 kernel: audit: type=1106 audit(1757721978.843:485): pid=6094 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:18.866020 kernel: audit: type=1104 audit(1757721978.845:486): pid=6094 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:18.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.29.1:22-139.178.89.65:39332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:18.867172 systemd[1]: session-14.scope: Deactivated successfully.
Sep 13 00:06:18.867985 systemd-logind[1911]: Session 14 logged out. Waiting for processes to exit.
Sep 13 00:06:18.870484 systemd-logind[1911]: Removed session 14.
Sep 13 00:06:21.108693 kubelet[3053]: I0913 00:06:21.108627 3053 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 13 00:06:21.152144 kubelet[3053]: I0913 00:06:21.152060 3053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cdkj7" podStartSLOduration=38.418421292 podStartE2EDuration="58.152033654s" podCreationTimestamp="2025-09-13 00:05:23 +0000 UTC" firstStartedPulling="2025-09-13 00:05:54.267249229 +0000 UTC m=+59.984549386" lastFinishedPulling="2025-09-13 00:06:14.000861567 +0000 UTC m=+79.718161748" observedRunningTime="2025-09-13 00:06:14.484216499 +0000 UTC m=+80.201516704" watchObservedRunningTime="2025-09-13 00:06:21.152033654 +0000 UTC m=+86.869333823"
Sep 13 00:06:21.215000 audit[6107]: NETFILTER_CFG table=filter:126 family=2 entries=10 op=nft_register_rule pid=6107 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:06:21.215000 audit[6107]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffcb35d120 a2=0 a3=1 items=0 ppid=3158 pid=6107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:06:21.215000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:06:21.225000 audit[6107]: NETFILTER_CFG table=nat:127 family=2 entries=36 op=nft_register_chain pid=6107 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 00:06:21.225000 audit[6107]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=12004 a0=3 a1=ffffcb35d120 a2=0 a3=1 items=0 ppid=3158 pid=6107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:06:21.225000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 00:06:23.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.29.1:22-139.178.89.65:33254 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:23.866363 systemd[1]: Started sshd@14-172.31.29.1:22-139.178.89.65:33254.service.
Sep 13 00:06:23.869004 kernel: kauditd_printk_skb: 7 callbacks suppressed
Sep 13 00:06:23.869110 kernel: audit: type=1130 audit(1757721983.865:490): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.29.1:22-139.178.89.65:33254 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:24.050000 audit[6108]: USER_ACCT pid=6108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:24.056339 sshd[6108]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:06:24.057598 sshd[6108]: Accepted publickey for core from 139.178.89.65 port 33254 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM
Sep 13 00:06:24.054000 audit[6108]: CRED_ACQ pid=6108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:24.073218 kernel: audit: type=1101 audit(1757721984.050:491): pid=6108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:24.073490 kernel: audit: type=1103 audit(1757721984.054:492): pid=6108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:24.073875 kernel: audit: type=1006 audit(1757721984.054:493): pid=6108 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1
Sep 13 00:06:24.087570 systemd[1]: Started session-15.scope.
Sep 13 00:06:24.054000 audit[6108]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffec3cf7d0 a2=3 a3=1 items=0 ppid=1 pid=6108 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:06:24.091382 systemd-logind[1911]: New session 15 of user core.
Sep 13 00:06:24.099340 kernel: audit: type=1300 audit(1757721984.054:493): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffec3cf7d0 a2=3 a3=1 items=0 ppid=1 pid=6108 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:06:24.115670 kernel: audit: type=1327 audit(1757721984.054:493): proctitle=737368643A20636F7265205B707269765D
Sep 13 00:06:24.054000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 00:06:24.111000 audit[6108]: USER_START pid=6108 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:24.129210 kernel: audit: type=1105 audit(1757721984.111:494): pid=6108 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:24.116000 audit[6111]: CRED_ACQ pid=6111 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:24.139092 kernel: audit: type=1103 audit(1757721984.116:495): pid=6111 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:24.557136 sshd[6108]: pam_unix(sshd:session): session closed for user core
Sep 13 00:06:24.559000 audit[6108]: USER_END pid=6108 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:24.564918 systemd-logind[1911]: Session 15 logged out. Waiting for processes to exit.
Sep 13 00:06:24.568803 systemd[1]: sshd@14-172.31.29.1:22-139.178.89.65:33254.service: Deactivated successfully.
Sep 13 00:06:24.570490 systemd[1]: session-15.scope: Deactivated successfully.
Sep 13 00:06:24.581139 systemd-logind[1911]: Removed session 15.
Sep 13 00:06:24.585830 kernel: audit: type=1106 audit(1757721984.559:496): pid=6108 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:24.559000 audit[6108]: CRED_DISP pid=6108 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:24.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.29.1:22-139.178.89.65:33254 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:24.606813 kernel: audit: type=1104 audit(1757721984.559:497): pid=6108 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:29.584734 systemd[1]: Started sshd@15-172.31.29.1:22-139.178.89.65:33264.service.
Sep 13 00:06:29.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.29.1:22-139.178.89.65:33264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.587418 kernel: kauditd_printk_skb: 1 callbacks suppressed
Sep 13 00:06:29.587574 kernel: audit: type=1130 audit(1757721989.584:499): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.29.1:22-139.178.89.65:33264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:29.778000 audit[6122]: USER_ACCT pid=6122 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:29.779980 sshd[6122]: Accepted publickey for core from 139.178.89.65 port 33264 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM
Sep 13 00:06:29.784125 sshd[6122]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:06:29.778000 audit[6122]: CRED_ACQ pid=6122 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:29.802043 kernel: audit: type=1101 audit(1757721989.778:500): pid=6122 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:29.802207 kernel: audit: type=1103 audit(1757721989.778:501): pid=6122 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:29.808841 kernel: audit: type=1006 audit(1757721989.778:502): pid=6122 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1
Sep 13 00:06:29.778000 audit[6122]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcc6988b0 a2=3 a3=1 items=0 ppid=1 pid=6122 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:06:29.819950 kernel: audit: type=1300 audit(1757721989.778:502): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcc6988b0 a2=3 a3=1 items=0 ppid=1 pid=6122 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:06:29.778000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 00:06:29.825406 kernel: audit: type=1327 audit(1757721989.778:502): proctitle=737368643A20636F7265205B707269765D
Sep 13 00:06:29.827908 systemd-logind[1911]: New session 16 of user core.
Sep 13 00:06:29.829111 systemd[1]: Started session-16.scope.
Sep 13 00:06:29.842000 audit[6122]: USER_START pid=6122 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:29.855000 audit[6125]: CRED_ACQ pid=6125 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:29.865984 kernel: audit: type=1105 audit(1757721989.842:503): pid=6122 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:29.866204 kernel: audit: type=1103 audit(1757721989.855:504): pid=6125 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:30.157172 sshd[6122]: pam_unix(sshd:session): session closed for user core
Sep 13 00:06:30.158000 audit[6122]: USER_END pid=6122 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:30.159000 audit[6122]: CRED_DISP pid=6122 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:30.174960 kernel: audit: type=1106 audit(1757721990.158:505): pid=6122 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:30.174700 systemd[1]: sshd@15-172.31.29.1:22-139.178.89.65:33264.service: Deactivated successfully.
Sep 13 00:06:30.176352 systemd[1]: session-16.scope: Deactivated successfully.
Sep 13 00:06:30.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.29.1:22-139.178.89.65:33264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:30.186812 kernel: audit: type=1104 audit(1757721990.159:506): pid=6122 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:30.187183 systemd-logind[1911]: Session 16 logged out. Waiting for processes to exit.
Sep 13 00:06:30.190100 systemd-logind[1911]: Removed session 16.
Sep 13 00:06:35.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.29.1:22-139.178.89.65:46230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:35.182267 systemd[1]: Started sshd@16-172.31.29.1:22-139.178.89.65:46230.service.
Sep 13 00:06:35.184904 kernel: kauditd_printk_skb: 1 callbacks suppressed
Sep 13 00:06:35.184987 kernel: audit: type=1130 audit(1757721995.180:508): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.29.1:22-139.178.89.65:46230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:06:35.360000 audit[6140]: USER_ACCT pid=6140 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:35.365284 sshd[6140]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:06:35.373343 sshd[6140]: Accepted publickey for core from 139.178.89.65 port 46230 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM
Sep 13 00:06:35.384047 kernel: audit: type=1101 audit(1757721995.360:509): pid=6140 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:35.384217 kernel: audit: type=1103 audit(1757721995.362:510): pid=6140 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:35.362000 audit[6140]: CRED_ACQ pid=6140 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:35.376315 systemd[1]: Started session-17.scope.
Sep 13 00:06:35.378809 systemd-logind[1911]: New session 17 of user core.
Sep 13 00:06:35.400042 kernel: audit: type=1006 audit(1757721995.362:511): pid=6140 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1
Sep 13 00:06:35.362000 audit[6140]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffc4eea00 a2=3 a3=1 items=0 ppid=1 pid=6140 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:06:35.414441 kernel: audit: type=1300 audit(1757721995.362:511): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffc4eea00 a2=3 a3=1 items=0 ppid=1 pid=6140 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:06:35.362000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 00:06:35.418471 kernel: audit: type=1327 audit(1757721995.362:511): proctitle=737368643A20636F7265205B707269765D
Sep 13 00:06:35.391000 audit[6140]: USER_START pid=6140 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:35.394000 audit[6143]: CRED_ACQ pid=6143 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:35.439541 kernel: audit: type=1105 audit(1757721995.391:512): pid=6140 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:35.439712 kernel: audit: type=1103 audit(1757721995.394:513): pid=6143 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:35.667490 sshd[6140]: pam_unix(sshd:session): session closed for user core
Sep 13 00:06:35.667000 audit[6140]: USER_END pid=6140 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:06:35.673872 systemd[1]: sshd@16-172.31.29.1:22-139.178.89.65:46230.service: Deactivated successfully.
Sep 13 00:06:35.676037 systemd[1]: session-17.scope: Deactivated successfully.
Sep 13 00:06:35.682753 systemd-logind[1911]: Session 17 logged out. Waiting for processes to exit.
Sep 13 00:06:35.669000 audit[6140]: CRED_DISP pid=6140 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:35.693885 kernel: audit: type=1106 audit(1757721995.667:514): pid=6140 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:35.694077 kernel: audit: type=1104 audit(1757721995.669:515): pid=6140 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:35.695713 systemd-logind[1911]: Removed session 17. Sep 13 00:06:35.698552 systemd[1]: Started sshd@17-172.31.29.1:22-139.178.89.65:46234.service. Sep 13 00:06:35.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.29.1:22-139.178.89.65:46230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:06:35.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.29.1:22-139.178.89.65:46234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:06:35.880000 audit[6153]: USER_ACCT pid=6153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:35.883006 sshd[6153]: Accepted publickey for core from 139.178.89.65 port 46234 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:06:35.883000 audit[6153]: CRED_ACQ pid=6153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:35.883000 audit[6153]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff25dc7c0 a2=3 a3=1 items=0 ppid=1 pid=6153 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:35.883000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:06:35.886871 sshd[6153]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:06:35.896893 systemd-logind[1911]: New session 18 of user core. Sep 13 00:06:35.898588 systemd[1]: Started session-18.scope. 
Sep 13 00:06:35.915000 audit[6153]: USER_START pid=6153 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:35.919000 audit[6156]: CRED_ACQ pid=6156 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:36.554286 sshd[6153]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:36.554000 audit[6153]: USER_END pid=6153 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:36.554000 audit[6153]: CRED_DISP pid=6153 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:36.559328 systemd[1]: sshd@17-172.31.29.1:22-139.178.89.65:46234.service: Deactivated successfully. Sep 13 00:06:36.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.29.1:22-139.178.89.65:46234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:06:36.561596 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:06:36.561602 systemd-logind[1911]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:06:36.566739 systemd-logind[1911]: Removed session 18. 
Sep 13 00:06:36.576531 systemd[1]: Started sshd@18-172.31.29.1:22-139.178.89.65:46248.service. Sep 13 00:06:36.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.29.1:22-139.178.89.65:46248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:06:36.658712 systemd[1]: run-containerd-runc-k8s.io-2be5625fcaee790eda909f45b0c4efacdce04c5c77d971509d82c1cfa8ae118a-runc.jeXi8j.mount: Deactivated successfully. Sep 13 00:06:36.778000 audit[6166]: USER_ACCT pid=6166 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:36.780865 sshd[6166]: Accepted publickey for core from 139.178.89.65 port 46248 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:06:36.782000 audit[6166]: CRED_ACQ pid=6166 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:36.782000 audit[6166]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff7f0bcd0 a2=3 a3=1 items=0 ppid=1 pid=6166 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:36.782000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:06:36.785073 sshd[6166]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:06:36.831370 systemd-logind[1911]: New session 19 of user core. Sep 13 00:06:36.836652 systemd[1]: Started session-19.scope. 
Sep 13 00:06:36.858000 audit[6166]: USER_START pid=6166 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:36.861000 audit[6204]: CRED_ACQ pid=6204 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:37.026000 audit[6228]: NETFILTER_CFG table=filter:128 family=2 entries=9 op=nft_register_rule pid=6228 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:06:37.026000 audit[6228]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffd1b153a0 a2=0 a3=1 items=0 ppid=3158 pid=6228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:37.026000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:06:37.040000 audit[6228]: NETFILTER_CFG table=nat:129 family=2 entries=31 op=nft_register_chain pid=6228 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:06:37.040000 audit[6228]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10884 a0=3 a1=ffffd1b153a0 a2=0 a3=1 items=0 ppid=3158 pid=6228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:37.040000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:06:40.496850 
kernel: kauditd_printk_skb: 26 callbacks suppressed Sep 13 00:06:40.497068 kernel: audit: type=1325 audit(1757722000.490:534): table=filter:130 family=2 entries=8 op=nft_register_rule pid=6243 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:06:40.490000 audit[6243]: NETFILTER_CFG table=filter:130 family=2 entries=8 op=nft_register_rule pid=6243 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:06:40.490000 audit[6243]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffc5dd27a0 a2=0 a3=1 items=0 ppid=3158 pid=6243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:40.513543 kernel: audit: type=1300 audit(1757722000.490:534): arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffc5dd27a0 a2=0 a3=1 items=0 ppid=3158 pid=6243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:40.490000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:06:40.519405 kernel: audit: type=1327 audit(1757722000.490:534): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:06:40.518000 audit[6243]: NETFILTER_CFG table=nat:131 family=2 entries=26 op=nft_register_rule pid=6243 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:06:40.528419 kernel: audit: type=1325 audit(1757722000.518:535): table=nat:131 family=2 entries=26 op=nft_register_rule pid=6243 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:06:40.530239 sshd[6166]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:40.518000 audit[6243]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=8076 a0=3 a1=ffffc5dd27a0 a2=0 a3=1 items=0 ppid=3158 pid=6243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:40.551445 systemd[1]: Started sshd@19-172.31.29.1:22-139.178.89.65:54386.service. Sep 13 00:06:40.559159 kernel: audit: type=1300 audit(1757722000.518:535): arch=c00000b7 syscall=211 success=yes exit=8076 a0=3 a1=ffffc5dd27a0 a2=0 a3=1 items=0 ppid=3158 pid=6243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:40.568226 systemd[1]: sshd@18-172.31.29.1:22-139.178.89.65:46248.service: Deactivated successfully. Sep 13 00:06:40.576528 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:06:40.580149 systemd-logind[1911]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:06:40.518000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:06:40.600322 kernel: audit: type=1327 audit(1757722000.518:535): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:06:40.601147 systemd-logind[1911]: Removed session 19. 
Sep 13 00:06:40.534000 audit[6166]: USER_END pid=6166 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:40.615821 kernel: audit: type=1106 audit(1757722000.534:536): pid=6166 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:40.534000 audit[6166]: CRED_DISP pid=6166 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:40.636469 kernel: audit: type=1104 audit(1757722000.534:537): pid=6166 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:40.642049 kernel: audit: type=1130 audit(1757722000.551:538): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.29.1:22-139.178.89.65:54386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:06:40.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.29.1:22-139.178.89.65:54386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:06:40.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.29.1:22-139.178.89.65:46248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:06:40.656300 kernel: audit: type=1131 audit(1757722000.569:539): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.29.1:22-139.178.89.65:46248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:06:40.690000 audit[6249]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=6249 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:06:40.690000 audit[6249]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11944 a0=3 a1=fffffd1c24e0 a2=0 a3=1 items=0 ppid=3158 pid=6249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:40.690000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:06:40.791715 sshd[6245]: Accepted publickey for core from 139.178.89.65 port 54386 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:06:40.789000 audit[6245]: USER_ACCT pid=6245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:40.792000 audit[6245]: CRED_ACQ pid=6245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' 
Sep 13 00:06:40.792000 audit[6245]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff628b9c0 a2=3 a3=1 items=0 ppid=1 pid=6245 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:40.792000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:06:40.795300 sshd[6245]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:06:40.806595 systemd-logind[1911]: New session 20 of user core. Sep 13 00:06:40.808668 systemd[1]: Started session-20.scope. Sep 13 00:06:40.826000 audit[6245]: USER_START pid=6245 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:40.830000 audit[6251]: CRED_ACQ pid=6251 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:40.854000 audit[6249]: NETFILTER_CFG table=nat:133 family=2 entries=26 op=nft_register_rule pid=6249 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:06:40.854000 audit[6249]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8076 a0=3 a1=fffffd1c24e0 a2=0 a3=1 items=0 ppid=3158 pid=6249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:40.854000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:06:41.378158 sshd[6245]: pam_unix(sshd:session): session closed 
for user core Sep 13 00:06:41.379000 audit[6245]: USER_END pid=6245 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:41.379000 audit[6245]: CRED_DISP pid=6245 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:41.385054 systemd[1]: sshd@19-172.31.29.1:22-139.178.89.65:54386.service: Deactivated successfully. Sep 13 00:06:41.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.29.1:22-139.178.89.65:54386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:06:41.386880 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:06:41.388394 systemd-logind[1911]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:06:41.394120 systemd-logind[1911]: Removed session 20. Sep 13 00:06:41.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.29.1:22-139.178.89.65:54390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:06:41.404624 systemd[1]: Started sshd@20-172.31.29.1:22-139.178.89.65:54390.service. 
Sep 13 00:06:41.579000 audit[6259]: USER_ACCT pid=6259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:41.580094 sshd[6259]: Accepted publickey for core from 139.178.89.65 port 54390 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:06:41.581000 audit[6259]: CRED_ACQ pid=6259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:41.581000 audit[6259]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffef2280a0 a2=3 a3=1 items=0 ppid=1 pid=6259 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:41.581000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:06:41.583421 sshd[6259]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:06:41.594204 systemd[1]: Started session-21.scope. Sep 13 00:06:41.595127 systemd-logind[1911]: New session 21 of user core. 
Sep 13 00:06:41.611000 audit[6259]: USER_START pid=6259 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:41.614000 audit[6262]: CRED_ACQ pid=6262 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:41.854577 sshd[6259]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:41.856000 audit[6259]: USER_END pid=6259 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:41.857000 audit[6259]: CRED_DISP pid=6259 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:41.860731 systemd[1]: sshd@20-172.31.29.1:22-139.178.89.65:54390.service: Deactivated successfully. Sep 13 00:06:41.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.29.1:22-139.178.89.65:54390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:06:41.863198 systemd-logind[1911]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:06:41.863262 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:06:41.867724 systemd-logind[1911]: Removed session 21. 
Sep 13 00:06:46.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.29.1:22-139.178.89.65:54400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:06:46.882389 systemd[1]: Started sshd@21-172.31.29.1:22-139.178.89.65:54400.service. Sep 13 00:06:46.884822 kernel: kauditd_printk_skb: 27 callbacks suppressed Sep 13 00:06:46.884893 kernel: audit: type=1130 audit(1757722006.882:559): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.29.1:22-139.178.89.65:54400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:06:47.059000 audit[6272]: USER_ACCT pid=6272 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:47.067619 sshd[6272]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:06:47.068904 sshd[6272]: Accepted publickey for core from 139.178.89.65 port 54400 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:06:47.064000 audit[6272]: CRED_ACQ pid=6272 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:47.088812 kernel: audit: type=1101 audit(1757722007.059:560): pid=6272 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:47.089693 kernel: audit: type=1103 audit(1757722007.064:561): pid=6272 
uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:47.089763 kernel: audit: type=1006 audit(1757722007.064:562): pid=6272 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Sep 13 00:06:47.091168 kernel: audit: type=1300 audit(1757722007.064:562): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcca31210 a2=3 a3=1 items=0 ppid=1 pid=6272 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:47.064000 audit[6272]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcca31210 a2=3 a3=1 items=0 ppid=1 pid=6272 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:47.064000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:06:47.105204 kernel: audit: type=1327 audit(1757722007.064:562): proctitle=737368643A20636F7265205B707269765D Sep 13 00:06:47.105874 systemd-logind[1911]: New session 22 of user core. Sep 13 00:06:47.107347 systemd[1]: Started session-22.scope. 
Sep 13 00:06:47.118000 audit[6272]: USER_START pid=6272 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:47.132491 kernel: audit: type=1105 audit(1757722007.118:563): pid=6272 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:47.132622 kernel: audit: type=1103 audit(1757722007.130:564): pid=6275 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:47.130000 audit[6275]: CRED_ACQ pid=6275 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:47.475159 sshd[6272]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:47.477000 audit[6272]: USER_END pid=6272 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:47.492902 systemd-logind[1911]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:06:47.495941 systemd[1]: sshd@21-172.31.29.1:22-139.178.89.65:54400.service: Deactivated successfully. 
Sep 13 00:06:47.497588 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:06:47.500089 systemd-logind[1911]: Removed session 22. Sep 13 00:06:47.489000 audit[6272]: CRED_DISP pid=6272 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:47.522854 kernel: audit: type=1106 audit(1757722007.477:565): pid=6272 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:47.522983 kernel: audit: type=1104 audit(1757722007.489:566): pid=6272 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:47.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.29.1:22-139.178.89.65:54400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:06:48.888000 audit[6285]: NETFILTER_CFG table=filter:134 family=2 entries=20 op=nft_register_rule pid=6285 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:06:48.888000 audit[6285]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffe5eec030 a2=0 a3=1 items=0 ppid=3158 pid=6285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:48.888000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:06:48.910000 audit[6285]: NETFILTER_CFG table=nat:135 family=2 entries=110 op=nft_register_chain pid=6285 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:06:48.910000 audit[6285]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=50988 a0=3 a1=ffffe5eec030 a2=0 a3=1 items=0 ppid=3158 pid=6285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:48.910000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:06:52.502336 systemd[1]: Started sshd@22-172.31.29.1:22-139.178.89.65:48872.service. Sep 13 00:06:52.513904 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 13 00:06:52.514066 kernel: audit: type=1130 audit(1757722012.502:570): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.29.1:22-139.178.89.65:48872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:06:52.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.29.1:22-139.178.89.65:48872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:06:52.699495 sshd[6287]: Accepted publickey for core from 139.178.89.65 port 48872 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:06:52.698000 audit[6287]: USER_ACCT pid=6287 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:52.711229 sshd[6287]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:06:52.709000 audit[6287]: CRED_ACQ pid=6287 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:52.723435 kernel: audit: type=1101 audit(1757722012.698:571): pid=6287 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:52.723590 kernel: audit: type=1103 audit(1757722012.709:572): pid=6287 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:52.732881 kernel: audit: type=1006 audit(1757722012.709:573): pid=6287 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Sep 13 00:06:52.709000 
audit[6287]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc4060de0 a2=3 a3=1 items=0 ppid=1 pid=6287 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:52.745758 kernel: audit: type=1300 audit(1757722012.709:573): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc4060de0 a2=3 a3=1 items=0 ppid=1 pid=6287 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:52.748120 systemd-logind[1911]: New session 23 of user core. Sep 13 00:06:52.749074 systemd[1]: Started session-23.scope. Sep 13 00:06:52.709000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:06:52.754156 kernel: audit: type=1327 audit(1757722012.709:573): proctitle=737368643A20636F7265205B707269765D Sep 13 00:06:52.766000 audit[6287]: USER_START pid=6287 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:52.779000 audit[6290]: CRED_ACQ pid=6290 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:52.786972 kernel: audit: type=1105 audit(1757722012.766:574): pid=6287 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:52.797878 kernel: audit: type=1103 
audit(1757722012.779:575): pid=6290 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:53.109465 sshd[6287]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:53.111000 audit[6287]: USER_END pid=6287 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:53.126225 systemd[1]: sshd@22-172.31.29.1:22-139.178.89.65:48872.service: Deactivated successfully. Sep 13 00:06:53.129879 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:06:53.130268 systemd-logind[1911]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:06:53.139671 systemd-logind[1911]: Removed session 23. 
Sep 13 00:06:53.122000 audit[6287]: CRED_DISP pid=6287 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:53.151791 kernel: audit: type=1106 audit(1757722013.111:576): pid=6287 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:53.151920 kernel: audit: type=1104 audit(1757722013.122:577): pid=6287 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:53.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.29.1:22-139.178.89.65:48872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:06:58.027679 env[1920]: time="2025-09-13T00:06:58.026911900Z" level=info msg="StopPodSandbox for \"b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00\"" Sep 13 00:06:58.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.29.1:22-139.178.89.65:48874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:06:58.133690 systemd[1]: Started sshd@23-172.31.29.1:22-139.178.89.65:48874.service. 
Sep 13 00:06:58.136154 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:06:58.136253 kernel: audit: type=1130 audit(1757722018.133:579): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.29.1:22-139.178.89.65:48874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:06:58.254041 env[1920]: 2025-09-13 00:06:58.119 [WARNING][6311] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--1-k8s-calico--kube--controllers--6b97f6f77f--zv7t8-eth0", GenerateName:"calico-kube-controllers-6b97f6f77f-", Namespace:"calico-system", SelfLink:"", UID:"5015b2dd-c6f7-46e4-9af1-65597994d2b9", ResourceVersion:"1219", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b97f6f77f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-1", ContainerID:"597b77902a13bf16c6747710c077f79062a77c0536217fbf7e305e81f56c3468", Pod:"calico-kube-controllers-6b97f6f77f-zv7t8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali166f7934ca5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:06:58.254041 env[1920]: 2025-09-13 00:06:58.123 [INFO][6311] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" Sep 13 00:06:58.254041 env[1920]: 2025-09-13 00:06:58.123 [INFO][6311] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" iface="eth0" netns="" Sep 13 00:06:58.254041 env[1920]: 2025-09-13 00:06:58.123 [INFO][6311] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" Sep 13 00:06:58.254041 env[1920]: 2025-09-13 00:06:58.124 [INFO][6311] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" Sep 13 00:06:58.254041 env[1920]: 2025-09-13 00:06:58.223 [INFO][6318] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" HandleID="k8s-pod-network.b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" Workload="ip--172--31--29--1-k8s-calico--kube--controllers--6b97f6f77f--zv7t8-eth0" Sep 13 00:06:58.254041 env[1920]: 2025-09-13 00:06:58.224 [INFO][6318] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:06:58.254041 env[1920]: 2025-09-13 00:06:58.225 [INFO][6318] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:06:58.254041 env[1920]: 2025-09-13 00:06:58.239 [WARNING][6318] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" HandleID="k8s-pod-network.b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" Workload="ip--172--31--29--1-k8s-calico--kube--controllers--6b97f6f77f--zv7t8-eth0" Sep 13 00:06:58.254041 env[1920]: 2025-09-13 00:06:58.239 [INFO][6318] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" HandleID="k8s-pod-network.b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" Workload="ip--172--31--29--1-k8s-calico--kube--controllers--6b97f6f77f--zv7t8-eth0" Sep 13 00:06:58.254041 env[1920]: 2025-09-13 00:06:58.245 [INFO][6318] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:06:58.254041 env[1920]: 2025-09-13 00:06:58.249 [INFO][6311] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" Sep 13 00:06:58.255130 env[1920]: time="2025-09-13T00:06:58.255073488Z" level=info msg="TearDown network for sandbox \"b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00\" successfully" Sep 13 00:06:58.255268 env[1920]: time="2025-09-13T00:06:58.255234593Z" level=info msg="StopPodSandbox for \"b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00\" returns successfully" Sep 13 00:06:58.256258 env[1920]: time="2025-09-13T00:06:58.256186109Z" level=info msg="RemovePodSandbox for \"b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00\"" Sep 13 00:06:58.256730 env[1920]: time="2025-09-13T00:06:58.256647581Z" level=info msg="Forcibly stopping sandbox \"b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00\"" Sep 13 00:06:58.322000 audit[6320]: USER_ACCT pid=6320 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 
addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:58.325757 sshd[6320]: Accepted publickey for core from 139.178.89.65 port 48874 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:06:58.329006 sshd[6320]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:06:58.344233 systemd-logind[1911]: New session 24 of user core. Sep 13 00:06:58.346316 systemd[1]: Started session-24.scope. Sep 13 00:06:58.326000 audit[6320]: CRED_ACQ pid=6320 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:58.358742 kernel: audit: type=1101 audit(1757722018.322:580): pid=6320 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:58.358892 kernel: audit: type=1103 audit(1757722018.326:581): pid=6320 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:58.399946 kernel: audit: type=1006 audit(1757722018.326:582): pid=6320 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Sep 13 00:06:58.400088 kernel: audit: type=1300 audit(1757722018.326:582): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdc929430 a2=3 a3=1 items=0 ppid=1 pid=6320 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:58.400150 kernel: audit: type=1327 audit(1757722018.326:582): 
proctitle=737368643A20636F7265205B707269765D Sep 13 00:06:58.326000 audit[6320]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdc929430 a2=3 a3=1 items=0 ppid=1 pid=6320 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:06:58.326000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:06:58.368000 audit[6320]: USER_START pid=6320 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:58.412704 kernel: audit: type=1105 audit(1757722018.368:583): pid=6320 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:58.371000 audit[6340]: CRED_ACQ pid=6340 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:58.429999 kernel: audit: type=1103 audit(1757722018.371:584): pid=6340 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:58.641258 env[1920]: 2025-09-13 00:06:58.446 [WARNING][6334] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--1-k8s-calico--kube--controllers--6b97f6f77f--zv7t8-eth0", GenerateName:"calico-kube-controllers-6b97f6f77f-", Namespace:"calico-system", SelfLink:"", UID:"5015b2dd-c6f7-46e4-9af1-65597994d2b9", ResourceVersion:"1219", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b97f6f77f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-1", ContainerID:"597b77902a13bf16c6747710c077f79062a77c0536217fbf7e305e81f56c3468", Pod:"calico-kube-controllers-6b97f6f77f-zv7t8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali166f7934ca5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:06:58.641258 env[1920]: 2025-09-13 00:06:58.447 [INFO][6334] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" Sep 13 00:06:58.641258 env[1920]: 2025-09-13 00:06:58.447 [INFO][6334] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" iface="eth0" netns="" Sep 13 00:06:58.641258 env[1920]: 2025-09-13 00:06:58.447 [INFO][6334] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" Sep 13 00:06:58.641258 env[1920]: 2025-09-13 00:06:58.447 [INFO][6334] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" Sep 13 00:06:58.641258 env[1920]: 2025-09-13 00:06:58.582 [INFO][6344] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" HandleID="k8s-pod-network.b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" Workload="ip--172--31--29--1-k8s-calico--kube--controllers--6b97f6f77f--zv7t8-eth0" Sep 13 00:06:58.641258 env[1920]: 2025-09-13 00:06:58.583 [INFO][6344] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:06:58.641258 env[1920]: 2025-09-13 00:06:58.583 [INFO][6344] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:06:58.641258 env[1920]: 2025-09-13 00:06:58.616 [WARNING][6344] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" HandleID="k8s-pod-network.b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" Workload="ip--172--31--29--1-k8s-calico--kube--controllers--6b97f6f77f--zv7t8-eth0" Sep 13 00:06:58.641258 env[1920]: 2025-09-13 00:06:58.616 [INFO][6344] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" HandleID="k8s-pod-network.b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" Workload="ip--172--31--29--1-k8s-calico--kube--controllers--6b97f6f77f--zv7t8-eth0" Sep 13 00:06:58.641258 env[1920]: 2025-09-13 00:06:58.628 [INFO][6344] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:06:58.641258 env[1920]: 2025-09-13 00:06:58.636 [INFO][6334] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00" Sep 13 00:06:58.642442 env[1920]: time="2025-09-13T00:06:58.642371849Z" level=info msg="TearDown network for sandbox \"b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00\" successfully" Sep 13 00:06:58.656172 env[1920]: time="2025-09-13T00:06:58.656084353Z" level=info msg="RemovePodSandbox \"b93619d6976221de9855de25a91dcafe01415f9d70722c4cb09bcc4f10be9a00\" returns successfully" Sep 13 00:06:58.657306 env[1920]: time="2025-09-13T00:06:58.657260887Z" level=info msg="StopPodSandbox for \"dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec\"" Sep 13 00:06:58.786632 sshd[6320]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:58.794000 audit[6320]: USER_END pid=6320 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 
00:06:58.803650 systemd[1]: sshd@23-172.31.29.1:22-139.178.89.65:48874.service: Deactivated successfully. Sep 13 00:06:58.805209 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 00:06:58.808200 systemd-logind[1911]: Session 24 logged out. Waiting for processes to exit. Sep 13 00:06:58.811687 systemd-logind[1911]: Removed session 24. Sep 13 00:06:58.800000 audit[6320]: CRED_DISP pid=6320 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:58.833818 kernel: audit: type=1106 audit(1757722018.794:585): pid=6320 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:58.833984 kernel: audit: type=1104 audit(1757722018.800:586): pid=6320 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:06:58.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.29.1:22-139.178.89.65:48874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:06:59.025203 env[1920]: 2025-09-13 00:06:58.849 [WARNING][6364] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--1-k8s-coredns--7c65d6cfc9--fkdwg-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"42009314-cb57-4945-ba05-c28efb80272b", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 4, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-1", ContainerID:"7ed572b47446112f184bc5bc46147fe7e895e7e7aa9c784208191688f9526442", Pod:"coredns-7c65d6cfc9-fkdwg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali857449cc2f9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:06:59.025203 env[1920]: 2025-09-13 00:06:58.849 
[INFO][6364] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" Sep 13 00:06:59.025203 env[1920]: 2025-09-13 00:06:58.849 [INFO][6364] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" iface="eth0" netns="" Sep 13 00:06:59.025203 env[1920]: 2025-09-13 00:06:58.849 [INFO][6364] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" Sep 13 00:06:59.025203 env[1920]: 2025-09-13 00:06:58.849 [INFO][6364] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" Sep 13 00:06:59.025203 env[1920]: 2025-09-13 00:06:58.957 [INFO][6374] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" HandleID="k8s-pod-network.dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" Workload="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--fkdwg-eth0" Sep 13 00:06:59.025203 env[1920]: 2025-09-13 00:06:58.958 [INFO][6374] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:06:59.025203 env[1920]: 2025-09-13 00:06:58.958 [INFO][6374] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:06:59.025203 env[1920]: 2025-09-13 00:06:58.986 [WARNING][6374] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" HandleID="k8s-pod-network.dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" Workload="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--fkdwg-eth0" Sep 13 00:06:59.025203 env[1920]: 2025-09-13 00:06:58.999 [INFO][6374] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" HandleID="k8s-pod-network.dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" Workload="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--fkdwg-eth0" Sep 13 00:06:59.025203 env[1920]: 2025-09-13 00:06:59.019 [INFO][6374] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:06:59.025203 env[1920]: 2025-09-13 00:06:59.021 [INFO][6364] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" Sep 13 00:06:59.025203 env[1920]: time="2025-09-13T00:06:59.024276098Z" level=info msg="TearDown network for sandbox \"dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec\" successfully" Sep 13 00:06:59.025203 env[1920]: time="2025-09-13T00:06:59.024324219Z" level=info msg="StopPodSandbox for \"dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec\" returns successfully" Sep 13 00:06:59.026873 env[1920]: time="2025-09-13T00:06:59.026695240Z" level=info msg="RemovePodSandbox for \"dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec\"" Sep 13 00:06:59.027163 env[1920]: time="2025-09-13T00:06:59.027061705Z" level=info msg="Forcibly stopping sandbox \"dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec\"" Sep 13 00:06:59.300167 env[1920]: 2025-09-13 00:06:59.193 [WARNING][6388] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--1-k8s-coredns--7c65d6cfc9--fkdwg-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"42009314-cb57-4945-ba05-c28efb80272b", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 4, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-1", ContainerID:"7ed572b47446112f184bc5bc46147fe7e895e7e7aa9c784208191688f9526442", Pod:"coredns-7c65d6cfc9-fkdwg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali857449cc2f9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:06:59.300167 env[1920]: 2025-09-13 00:06:59.193 
[INFO][6388] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" Sep 13 00:06:59.300167 env[1920]: 2025-09-13 00:06:59.193 [INFO][6388] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" iface="eth0" netns="" Sep 13 00:06:59.300167 env[1920]: 2025-09-13 00:06:59.193 [INFO][6388] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" Sep 13 00:06:59.300167 env[1920]: 2025-09-13 00:06:59.193 [INFO][6388] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" Sep 13 00:06:59.300167 env[1920]: 2025-09-13 00:06:59.272 [INFO][6395] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" HandleID="k8s-pod-network.dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" Workload="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--fkdwg-eth0" Sep 13 00:06:59.300167 env[1920]: 2025-09-13 00:06:59.274 [INFO][6395] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:06:59.300167 env[1920]: 2025-09-13 00:06:59.274 [INFO][6395] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:06:59.300167 env[1920]: 2025-09-13 00:06:59.290 [WARNING][6395] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" HandleID="k8s-pod-network.dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" Workload="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--fkdwg-eth0" Sep 13 00:06:59.300167 env[1920]: 2025-09-13 00:06:59.290 [INFO][6395] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" HandleID="k8s-pod-network.dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" Workload="ip--172--31--29--1-k8s-coredns--7c65d6cfc9--fkdwg-eth0" Sep 13 00:06:59.300167 env[1920]: 2025-09-13 00:06:59.293 [INFO][6395] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:06:59.300167 env[1920]: 2025-09-13 00:06:59.296 [INFO][6388] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec" Sep 13 00:06:59.301864 env[1920]: time="2025-09-13T00:06:59.301798181Z" level=info msg="TearDown network for sandbox \"dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec\" successfully" Sep 13 00:06:59.312158 env[1920]: time="2025-09-13T00:06:59.312095284Z" level=info msg="RemovePodSandbox \"dd60f549f0591224ea2c4c976fd135437d8368d433c0bc6818342a9fffeed8ec\" returns successfully" Sep 13 00:06:59.313117 env[1920]: time="2025-09-13T00:06:59.313058897Z" level=info msg="StopPodSandbox for \"dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b\"" Sep 13 00:06:59.481820 env[1920]: 2025-09-13 00:06:59.400 [WARNING][6410] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--1-k8s-goldmane--7988f88666--6v2qn-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"dce9497b-6716-4542-9084-aaea79149a75", ResourceVersion:"1338", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-1", ContainerID:"5a3336a46a17f3b65ca66b0f81401f8f5c199a9311e56535c600ff6a80093d45", Pod:"goldmane-7988f88666-6v2qn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.50.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidb8036257aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:06:59.481820 env[1920]: 2025-09-13 00:06:59.401 [INFO][6410] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" Sep 13 00:06:59.481820 env[1920]: 2025-09-13 00:06:59.401 [INFO][6410] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" iface="eth0" netns="" Sep 13 00:06:59.481820 env[1920]: 2025-09-13 00:06:59.401 [INFO][6410] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" Sep 13 00:06:59.481820 env[1920]: 2025-09-13 00:06:59.401 [INFO][6410] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" Sep 13 00:06:59.481820 env[1920]: 2025-09-13 00:06:59.458 [INFO][6417] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" HandleID="k8s-pod-network.dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" Workload="ip--172--31--29--1-k8s-goldmane--7988f88666--6v2qn-eth0" Sep 13 00:06:59.481820 env[1920]: 2025-09-13 00:06:59.458 [INFO][6417] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:06:59.481820 env[1920]: 2025-09-13 00:06:59.458 [INFO][6417] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:06:59.481820 env[1920]: 2025-09-13 00:06:59.472 [WARNING][6417] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" HandleID="k8s-pod-network.dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" Workload="ip--172--31--29--1-k8s-goldmane--7988f88666--6v2qn-eth0" Sep 13 00:06:59.481820 env[1920]: 2025-09-13 00:06:59.473 [INFO][6417] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" HandleID="k8s-pod-network.dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" Workload="ip--172--31--29--1-k8s-goldmane--7988f88666--6v2qn-eth0" Sep 13 00:06:59.481820 env[1920]: 2025-09-13 00:06:59.476 [INFO][6417] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:06:59.481820 env[1920]: 2025-09-13 00:06:59.478 [INFO][6410] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" Sep 13 00:06:59.482708 env[1920]: time="2025-09-13T00:06:59.481838627Z" level=info msg="TearDown network for sandbox \"dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b\" successfully" Sep 13 00:06:59.482708 env[1920]: time="2025-09-13T00:06:59.481885225Z" level=info msg="StopPodSandbox for \"dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b\" returns successfully" Sep 13 00:06:59.482708 env[1920]: time="2025-09-13T00:06:59.482511581Z" level=info msg="RemovePodSandbox for \"dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b\"" Sep 13 00:06:59.482708 env[1920]: time="2025-09-13T00:06:59.482562798Z" level=info msg="Forcibly stopping sandbox \"dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b\"" Sep 13 00:06:59.650822 env[1920]: 2025-09-13 00:06:59.560 [WARNING][6435] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--1-k8s-goldmane--7988f88666--6v2qn-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"dce9497b-6716-4542-9084-aaea79149a75", ResourceVersion:"1338", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 5, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-1", ContainerID:"5a3336a46a17f3b65ca66b0f81401f8f5c199a9311e56535c600ff6a80093d45", Pod:"goldmane-7988f88666-6v2qn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.50.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidb8036257aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:06:59.650822 env[1920]: 2025-09-13 00:06:59.561 [INFO][6435] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" Sep 13 00:06:59.650822 env[1920]: 2025-09-13 00:06:59.561 [INFO][6435] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" iface="eth0" netns="" Sep 13 00:06:59.650822 env[1920]: 2025-09-13 00:06:59.561 [INFO][6435] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" Sep 13 00:06:59.650822 env[1920]: 2025-09-13 00:06:59.561 [INFO][6435] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" Sep 13 00:06:59.650822 env[1920]: 2025-09-13 00:06:59.625 [INFO][6442] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" HandleID="k8s-pod-network.dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" Workload="ip--172--31--29--1-k8s-goldmane--7988f88666--6v2qn-eth0" Sep 13 00:06:59.650822 env[1920]: 2025-09-13 00:06:59.626 [INFO][6442] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:06:59.650822 env[1920]: 2025-09-13 00:06:59.626 [INFO][6442] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:06:59.650822 env[1920]: 2025-09-13 00:06:59.640 [WARNING][6442] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" HandleID="k8s-pod-network.dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" Workload="ip--172--31--29--1-k8s-goldmane--7988f88666--6v2qn-eth0" Sep 13 00:06:59.650822 env[1920]: 2025-09-13 00:06:59.640 [INFO][6442] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" HandleID="k8s-pod-network.dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" Workload="ip--172--31--29--1-k8s-goldmane--7988f88666--6v2qn-eth0" Sep 13 00:06:59.650822 env[1920]: 2025-09-13 00:06:59.643 [INFO][6442] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:06:59.650822 env[1920]: 2025-09-13 00:06:59.646 [INFO][6435] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b" Sep 13 00:06:59.650822 env[1920]: time="2025-09-13T00:06:59.649513838Z" level=info msg="TearDown network for sandbox \"dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b\" successfully" Sep 13 00:06:59.658841 env[1920]: time="2025-09-13T00:06:59.656983345Z" level=info msg="RemovePodSandbox \"dec59b59eac21d7650c39c42449ab309cc6bc51e8a59f8e4df558c37affd0a1b\" returns successfully" Sep 13 00:07:03.812443 systemd[1]: Started sshd@24-172.31.29.1:22-139.178.89.65:38394.service. Sep 13 00:07:03.817824 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:07:03.817964 kernel: audit: type=1130 audit(1757722023.812:588): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.29.1:22-139.178.89.65:38394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:07:03.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.29.1:22-139.178.89.65:38394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:04.023447 sshd[6448]: Accepted publickey for core from 139.178.89.65 port 38394 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:07:04.022000 audit[6448]: USER_ACCT pid=6448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:07:04.034000 audit[6448]: CRED_ACQ pid=6448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:07:04.036613 sshd[6448]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:04.045372 kernel: audit: type=1101 audit(1757722024.022:589): pid=6448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:07:04.045510 kernel: audit: type=1103 audit(1757722024.034:590): pid=6448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:07:04.054027 kernel: audit: type=1006 audit(1757722024.035:591): pid=6448 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Sep 13 00:07:04.054180 
kernel: audit: type=1300 audit(1757722024.035:591): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcc9a2260 a2=3 a3=1 items=0 ppid=1 pid=6448 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:07:04.035000 audit[6448]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcc9a2260 a2=3 a3=1 items=0 ppid=1 pid=6448 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:07:04.053288 systemd[1]: Started session-25.scope. Sep 13 00:07:04.059588 systemd-logind[1911]: New session 25 of user core. Sep 13 00:07:04.035000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:07:04.075556 kernel: audit: type=1327 audit(1757722024.035:591): proctitle=737368643A20636F7265205B707269765D Sep 13 00:07:04.090000 audit[6448]: USER_START pid=6448 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:07:04.093000 audit[6451]: CRED_ACQ pid=6451 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:07:04.112753 kernel: audit: type=1105 audit(1757722024.090:592): pid=6448 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:07:04.112856 kernel: audit: type=1103 
audit(1757722024.093:593): pid=6451 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:07:04.482571 sshd[6448]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:04.484000 audit[6448]: USER_END pid=6448 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:07:04.500028 systemd[1]: sshd@24-172.31.29.1:22-139.178.89.65:38394.service: Deactivated successfully. Sep 13 00:07:04.500979 systemd-logind[1911]: Session 25 logged out. Waiting for processes to exit. Sep 13 00:07:04.502852 systemd[1]: session-25.scope: Deactivated successfully. Sep 13 00:07:04.506279 systemd-logind[1911]: Removed session 25. 
Sep 13 00:07:04.496000 audit[6448]: CRED_DISP pid=6448 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:07:04.529097 kernel: audit: type=1106 audit(1757722024.484:594): pid=6448 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:07:04.529237 kernel: audit: type=1104 audit(1757722024.496:595): pid=6448 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:07:04.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.29.1:22-139.178.89.65:38394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:06.965720 systemd[1]: run-containerd-runc-k8s.io-dad4fff024c4f5fee0310844c06f44418179385c69ed85739d4c8dcc2f307956-runc.643rEe.mount: Deactivated successfully. Sep 13 00:07:09.509132 systemd[1]: Started sshd@25-172.31.29.1:22-139.178.89.65:38402.service. Sep 13 00:07:09.513952 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:07:09.514123 kernel: audit: type=1130 audit(1757722029.508:597): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.29.1:22-139.178.89.65:38402 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:07:09.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.29.1:22-139.178.89.65:38402 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:09.690000 audit[6517]: USER_ACCT pid=6517 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:07:09.691441 sshd[6517]: Accepted publickey for core from 139.178.89.65 port 38402 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:07:09.694694 sshd[6517]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:09.690000 audit[6517]: CRED_ACQ pid=6517 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:07:09.712899 kernel: audit: type=1101 audit(1757722029.690:598): pid=6517 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:07:09.713066 kernel: audit: type=1103 audit(1757722029.690:599): pid=6517 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:07:09.720395 kernel: audit: type=1006 audit(1757722029.690:600): pid=6517 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Sep 13 00:07:09.690000 
audit[6517]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd5048f90 a2=3 a3=1 items=0 ppid=1 pid=6517 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:07:09.735081 kernel: audit: type=1300 audit(1757722029.690:600): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd5048f90 a2=3 a3=1 items=0 ppid=1 pid=6517 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:07:09.735230 kernel: audit: type=1327 audit(1757722029.690:600): proctitle=737368643A20636F7265205B707269765D Sep 13 00:07:09.690000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:07:09.739703 systemd[1]: Started session-26.scope. Sep 13 00:07:09.740192 systemd-logind[1911]: New session 26 of user core. Sep 13 00:07:09.760000 audit[6517]: USER_START pid=6517 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:07:09.774000 audit[6520]: CRED_ACQ pid=6520 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:07:09.789473 kernel: audit: type=1105 audit(1757722029.760:601): pid=6517 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:07:09.789632 kernel: audit: type=1103 
audit(1757722029.774:602): pid=6520 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:07:10.086392 sshd[6517]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:10.089000 audit[6517]: USER_END pid=6517 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:07:10.103303 systemd[1]: sshd@25-172.31.29.1:22-139.178.89.65:38402.service: Deactivated successfully. Sep 13 00:07:10.107469 systemd[1]: session-26.scope: Deactivated successfully. Sep 13 00:07:10.108422 systemd-logind[1911]: Session 26 logged out. Waiting for processes to exit. Sep 13 00:07:10.112300 systemd-logind[1911]: Removed session 26. 
Sep 13 00:07:10.089000 audit[6517]: CRED_DISP pid=6517 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:07:10.128687 kernel: audit: type=1106 audit(1757722030.089:603): pid=6517 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:07:10.128916 kernel: audit: type=1104 audit(1757722030.089:604): pid=6517 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:07:10.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.29.1:22-139.178.89.65:38402 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:10.382481 systemd[1]: run-containerd-runc-k8s.io-2be5625fcaee790eda909f45b0c4efacdce04c5c77d971509d82c1cfa8ae118a-runc.1zlQQY.mount: Deactivated successfully. Sep 13 00:07:13.212888 systemd[1]: run-containerd-runc-k8s.io-6806233a4bf6cf4664d483b78f05e38ca0df1f89677b8c29f388fc3191b96fea-runc.cTgr0J.mount: Deactivated successfully. Sep 13 00:07:15.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.29.1:22-139.178.89.65:53470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:15.112644 systemd[1]: Started sshd@26-172.31.29.1:22-139.178.89.65:53470.service. 
Sep 13 00:07:15.115198 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 00:07:15.115272 kernel: audit: type=1130 audit(1757722035.112:606): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.29.1:22-139.178.89.65:53470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:07:15.304179 sshd[6578]: Accepted publickey for core from 139.178.89.65 port 53470 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM Sep 13 00:07:15.303000 audit[6578]: USER_ACCT pid=6578 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:07:15.316624 sshd[6578]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:15.315000 audit[6578]: CRED_ACQ pid=6578 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:07:15.326447 kernel: audit: type=1101 audit(1757722035.303:607): pid=6578 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:07:15.326607 kernel: audit: type=1103 audit(1757722035.315:608): pid=6578 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:07:15.332727 kernel: audit: type=1006 audit(1757722035.315:609): pid=6578 uid=0 subj=system_u:system_r:kernel_t:s0 
old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Sep 13 00:07:15.315000 audit[6578]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc7c15630 a2=3 a3=1 items=0 ppid=1 pid=6578 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:07:15.343247 kernel: audit: type=1300 audit(1757722035.315:609): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc7c15630 a2=3 a3=1 items=0 ppid=1 pid=6578 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:07:15.315000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:07:15.347391 kernel: audit: type=1327 audit(1757722035.315:609): proctitle=737368643A20636F7265205B707269765D Sep 13 00:07:15.355089 systemd-logind[1911]: New session 27 of user core. Sep 13 00:07:15.357586 systemd[1]: Started session-27.scope. 
Sep 13 00:07:15.377000 audit[6578]: USER_START pid=6578 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:07:15.393000 audit[6581]: CRED_ACQ pid=6581 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:07:15.407799 kernel: audit: type=1105 audit(1757722035.377:610): pid=6578 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:07:15.407937 kernel: audit: type=1103 audit(1757722035.393:611): pid=6581 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:07:15.699984 sshd[6578]: pam_unix(sshd:session): session closed for user core
Sep 13 00:07:15.701000 audit[6578]: USER_END pid=6578 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:07:15.715000 audit[6578]: CRED_DISP pid=6578 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:07:15.719058 systemd[1]: sshd@26-172.31.29.1:22-139.178.89.65:53470.service: Deactivated successfully.
Sep 13 00:07:15.719859 kernel: audit: type=1106 audit(1757722035.701:612): pid=6578 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:07:15.721269 systemd[1]: session-27.scope: Deactivated successfully.
Sep 13 00:07:15.724143 systemd-logind[1911]: Session 27 logged out. Waiting for processes to exit.
Sep 13 00:07:15.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.29.1:22-139.178.89.65:53470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:07:15.738813 kernel: audit: type=1104 audit(1757722035.715:613): pid=6578 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:07:15.739273 systemd-logind[1911]: Removed session 27.
Sep 13 00:07:20.725524 systemd[1]: Started sshd@27-172.31.29.1:22-139.178.89.65:58910.service.
Sep 13 00:07:20.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.29.1:22-139.178.89.65:58910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:07:20.729137 kernel: kauditd_printk_skb: 1 callbacks suppressed
Sep 13 00:07:20.729242 kernel: audit: type=1130 audit(1757722040.724:615): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.29.1:22-139.178.89.65:58910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:07:20.909000 audit[6595]: USER_ACCT pid=6595 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:07:20.912087 sshd[6595]: Accepted publickey for core from 139.178.89.65 port 58910 ssh2: RSA SHA256:hZ9iVout2PrR+GbvdOVRihMPHc0rDrYOM1fRKHgWdwM
Sep 13 00:07:20.920991 sshd[6595]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:07:20.936464 kernel: audit: type=1101 audit(1757722040.909:616): pid=6595 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:07:20.936634 kernel: audit: type=1103 audit(1757722040.918:617): pid=6595 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:07:20.918000 audit[6595]: CRED_ACQ pid=6595 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:07:20.950030 systemd[1]: Started session-28.scope.
Sep 13 00:07:20.951937 systemd-logind[1911]: New session 28 of user core.
Sep 13 00:07:20.957545 kernel: audit: type=1006 audit(1757722040.918:618): pid=6595 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1
Sep 13 00:07:20.918000 audit[6595]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffe141630 a2=3 a3=1 items=0 ppid=1 pid=6595 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:07:20.978945 kernel: audit: type=1300 audit(1757722040.918:618): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffe141630 a2=3 a3=1 items=0 ppid=1 pid=6595 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:07:20.984318 kernel: audit: type=1327 audit(1757722040.918:618): proctitle=737368643A20636F7265205B707269765D
Sep 13 00:07:20.918000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 00:07:20.971000 audit[6595]: USER_START pid=6595 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:07:20.974000 audit[6598]: CRED_ACQ pid=6598 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:07:21.007076 kernel: audit: type=1105 audit(1757722040.971:619): pid=6595 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:07:21.007244 kernel: audit: type=1103 audit(1757722040.974:620): pid=6598 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:07:21.293722 sshd[6595]: pam_unix(sshd:session): session closed for user core
Sep 13 00:07:21.293000 audit[6595]: USER_END pid=6595 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:07:21.307347 systemd[1]: sshd@27-172.31.29.1:22-139.178.89.65:58910.service: Deactivated successfully.
Sep 13 00:07:21.311186 systemd[1]: session-28.scope: Deactivated successfully.
Sep 13 00:07:21.312000 systemd-logind[1911]: Session 28 logged out. Waiting for processes to exit.
Sep 13 00:07:21.314523 systemd-logind[1911]: Removed session 28.
Sep 13 00:07:21.302000 audit[6595]: CRED_DISP pid=6595 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:07:21.329679 kernel: audit: type=1106 audit(1757722041.293:621): pid=6595 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:07:21.329878 kernel: audit: type=1104 audit(1757722041.302:622): pid=6595 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 00:07:21.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.29.1:22-139.178.89.65:58910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:07:35.011068 env[1920]: time="2025-09-13T00:07:35.011000128Z" level=info msg="shim disconnected" id=fead1a70d39e98d6f30c240ba7189c1c188f9ffc5805441d39346e8af2867e22
Sep 13 00:07:35.012046 env[1920]: time="2025-09-13T00:07:35.011971929Z" level=warning msg="cleaning up after shim disconnected" id=fead1a70d39e98d6f30c240ba7189c1c188f9ffc5805441d39346e8af2867e22 namespace=k8s.io
Sep 13 00:07:35.012225 env[1920]: time="2025-09-13T00:07:35.012192313Z" level=info msg="cleaning up dead shim"
Sep 13 00:07:35.019284 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fead1a70d39e98d6f30c240ba7189c1c188f9ffc5805441d39346e8af2867e22-rootfs.mount: Deactivated successfully.
Sep 13 00:07:35.034058 env[1920]: time="2025-09-13T00:07:35.034000597Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:07:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6644 runtime=io.containerd.runc.v2\n"
Sep 13 00:07:35.795306 kubelet[3053]: I0913 00:07:35.795241 3053 scope.go:117] "RemoveContainer" containerID="fead1a70d39e98d6f30c240ba7189c1c188f9ffc5805441d39346e8af2867e22"
Sep 13 00:07:35.799903 env[1920]: time="2025-09-13T00:07:35.799847584Z" level=info msg="CreateContainer within sandbox \"322fda4b3f8c7355d318695cfce14d8cf5cb2e3ee272b63e6a672490468720ea\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 13 00:07:35.832811 env[1920]: time="2025-09-13T00:07:35.831024102Z" level=info msg="CreateContainer within sandbox \"322fda4b3f8c7355d318695cfce14d8cf5cb2e3ee272b63e6a672490468720ea\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e46ff16a1bbc552dd32a49a15a686b5113f7cd1ac6b770291801c934556fbc65\""
Sep 13 00:07:35.833302 env[1920]: time="2025-09-13T00:07:35.833247562Z" level=info msg="StartContainer for \"e46ff16a1bbc552dd32a49a15a686b5113f7cd1ac6b770291801c934556fbc65\""
Sep 13 00:07:35.991358 env[1920]: time="2025-09-13T00:07:35.991290721Z" level=info msg="StartContainer for \"e46ff16a1bbc552dd32a49a15a686b5113f7cd1ac6b770291801c934556fbc65\" returns successfully"
Sep 13 00:07:36.104692 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-181471d92a63981385fbce8434a3c02be7a0950ee77b60083640be9fc54ef304-rootfs.mount: Deactivated successfully.
Sep 13 00:07:36.107884 env[1920]: time="2025-09-13T00:07:36.107745288Z" level=info msg="shim disconnected" id=181471d92a63981385fbce8434a3c02be7a0950ee77b60083640be9fc54ef304
Sep 13 00:07:36.107884 env[1920]: time="2025-09-13T00:07:36.107873115Z" level=warning msg="cleaning up after shim disconnected" id=181471d92a63981385fbce8434a3c02be7a0950ee77b60083640be9fc54ef304 namespace=k8s.io
Sep 13 00:07:36.108533 env[1920]: time="2025-09-13T00:07:36.107895519Z" level=info msg="cleaning up dead shim"
Sep 13 00:07:36.140808 env[1920]: time="2025-09-13T00:07:36.139921385Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:07:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6707 runtime=io.containerd.runc.v2\n"
Sep 13 00:07:36.808936 kubelet[3053]: I0913 00:07:36.806573 3053 scope.go:117] "RemoveContainer" containerID="181471d92a63981385fbce8434a3c02be7a0950ee77b60083640be9fc54ef304"
Sep 13 00:07:36.833143 env[1920]: time="2025-09-13T00:07:36.830119583Z" level=info msg="CreateContainer within sandbox \"0bd6613285972bb746fe4d767d8bd8cc3bf42bdb707f8be76b1ab43df025451d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 13 00:07:36.873167 env[1920]: time="2025-09-13T00:07:36.873101587Z" level=info msg="CreateContainer within sandbox \"0bd6613285972bb746fe4d767d8bd8cc3bf42bdb707f8be76b1ab43df025451d\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"d9e98fafdac82d1b3ef27cbd4d2cf7ca79a754da086c433ff02e26f7a5e07f8b\""
Sep 13 00:07:36.874185 env[1920]: time="2025-09-13T00:07:36.874142426Z" level=info msg="StartContainer for \"d9e98fafdac82d1b3ef27cbd4d2cf7ca79a754da086c433ff02e26f7a5e07f8b\""
Sep 13 00:07:37.106163 systemd[1]: run-containerd-runc-k8s.io-6806233a4bf6cf4664d483b78f05e38ca0df1f89677b8c29f388fc3191b96fea-runc.aWqNZp.mount: Deactivated successfully.
Sep 13 00:07:37.226142 env[1920]: time="2025-09-13T00:07:37.226075084Z" level=info msg="StartContainer for \"d9e98fafdac82d1b3ef27cbd4d2cf7ca79a754da086c433ff02e26f7a5e07f8b\" returns successfully"
Sep 13 00:07:37.843850 kubelet[3053]: E0913 00:07:37.843788 3053 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-1?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 13 00:07:40.750300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb255b19651f3fd8ef9c39b58e77371ce84c2ab4977f123c1583eca706359088-rootfs.mount: Deactivated successfully.
Sep 13 00:07:40.754194 env[1920]: time="2025-09-13T00:07:40.754051259Z" level=info msg="shim disconnected" id=fb255b19651f3fd8ef9c39b58e77371ce84c2ab4977f123c1583eca706359088
Sep 13 00:07:40.754194 env[1920]: time="2025-09-13T00:07:40.754124353Z" level=warning msg="cleaning up after shim disconnected" id=fb255b19651f3fd8ef9c39b58e77371ce84c2ab4977f123c1583eca706359088 namespace=k8s.io
Sep 13 00:07:40.754194 env[1920]: time="2025-09-13T00:07:40.754145761Z" level=info msg="cleaning up dead shim"
Sep 13 00:07:40.772350 env[1920]: time="2025-09-13T00:07:40.772282821Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:07:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6825 runtime=io.containerd.runc.v2\n"
Sep 13 00:07:40.866483 kubelet[3053]: I0913 00:07:40.866144 3053 scope.go:117] "RemoveContainer" containerID="fb255b19651f3fd8ef9c39b58e77371ce84c2ab4977f123c1583eca706359088"
Sep 13 00:07:40.870932 env[1920]: time="2025-09-13T00:07:40.870870442Z" level=info msg="CreateContainer within sandbox \"e60d159f642436a695b3af8bc512ddca94cfa361dc6929aee00fca3f118e0752\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 13 00:07:40.901068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2380740365.mount: Deactivated successfully.
Sep 13 00:07:40.912886 env[1920]: time="2025-09-13T00:07:40.912737465Z" level=info msg="CreateContainer within sandbox \"e60d159f642436a695b3af8bc512ddca94cfa361dc6929aee00fca3f118e0752\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"49ea7f8e19097ccf9315ba53a2268330b1c9bb72adea9c4bd3b70f1570401c05\""
Sep 13 00:07:40.914117 env[1920]: time="2025-09-13T00:07:40.914034244Z" level=info msg="StartContainer for \"49ea7f8e19097ccf9315ba53a2268330b1c9bb72adea9c4bd3b70f1570401c05\""
Sep 13 00:07:41.067853 env[1920]: time="2025-09-13T00:07:41.067751900Z" level=info msg="StartContainer for \"49ea7f8e19097ccf9315ba53a2268330b1c9bb72adea9c4bd3b70f1570401c05\" returns successfully"