Jul 12 00:35:48.739068 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 12 00:35:48.739089 kernel: Linux version 5.15.186-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Jul 11 23:15:18 -00 2025
Jul 12 00:35:48.739096 kernel: efi: EFI v2.70 by EDK II
Jul 12 00:35:48.739102 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Jul 12 00:35:48.739107 kernel: random: crng init done
Jul 12 00:35:48.739112 kernel: ACPI: Early table checksum verification disabled
Jul 12 00:35:48.739119 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Jul 12 00:35:48.739126 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 12 00:35:48.739131 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:35:48.739137 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:35:48.739142 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:35:48.739148 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:35:48.739153 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:35:48.739158 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:35:48.739166 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:35:48.739172 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:35:48.739178 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:35:48.739184 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 12 00:35:48.739189 kernel: NUMA: Failed to initialise from firmware
Jul 12 00:35:48.739195 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 12 00:35:48.739201 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Jul 12 00:35:48.739207 kernel: Zone ranges:
Jul 12 00:35:48.739212 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 12 00:35:48.739219 kernel: DMA32 empty
Jul 12 00:35:48.739225 kernel: Normal empty
Jul 12 00:35:48.739231 kernel: Movable zone start for each node
Jul 12 00:35:48.739236 kernel: Early memory node ranges
Jul 12 00:35:48.739242 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Jul 12 00:35:48.739248 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Jul 12 00:35:48.739254 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Jul 12 00:35:48.739259 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Jul 12 00:35:48.739265 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Jul 12 00:35:48.739271 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Jul 12 00:35:48.739276 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Jul 12 00:35:48.739283 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 12 00:35:48.739289 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 12 00:35:48.739295 kernel: psci: probing for conduit method from ACPI.
Jul 12 00:35:48.739301 kernel: psci: PSCIv1.1 detected in firmware.
Jul 12 00:35:48.739307 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 12 00:35:48.739313 kernel: psci: Trusted OS migration not required
Jul 12 00:35:48.739321 kernel: psci: SMC Calling Convention v1.1
Jul 12 00:35:48.739327 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 12 00:35:48.739334 kernel: ACPI: SRAT not present
Jul 12 00:35:48.739341 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880
Jul 12 00:35:48.739347 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096
Jul 12 00:35:48.739353 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 12 00:35:48.739359 kernel: Detected PIPT I-cache on CPU0
Jul 12 00:35:48.739366 kernel: CPU features: detected: GIC system register CPU interface
Jul 12 00:35:48.739372 kernel: CPU features: detected: Hardware dirty bit management
Jul 12 00:35:48.739378 kernel: CPU features: detected: Spectre-v4
Jul 12 00:35:48.739440 kernel: CPU features: detected: Spectre-BHB
Jul 12 00:35:48.739448 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 12 00:35:48.739454 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 12 00:35:48.739460 kernel: CPU features: detected: ARM erratum 1418040
Jul 12 00:35:48.739467 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 12 00:35:48.739473 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 12 00:35:48.739479 kernel: Policy zone: DMA
Jul 12 00:35:48.739486 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6cb548cec1e3020e9c3dcbc1d7670f4d8bdc2e3c8e062898ccaed7fc9d588f65
Jul 12 00:35:48.739493 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 12 00:35:48.739499 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 12 00:35:48.739506 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 12 00:35:48.739512 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 12 00:35:48.739520 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7588K rodata, 36416K init, 777K bss, 114948K reserved, 0K cma-reserved)
Jul 12 00:35:48.739526 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 12 00:35:48.739532 kernel: trace event string verifier disabled
Jul 12 00:35:48.739538 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 12 00:35:48.739545 kernel: rcu: RCU event tracing is enabled.
Jul 12 00:35:48.739551 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 12 00:35:48.739558 kernel: Trampoline variant of Tasks RCU enabled.
Jul 12 00:35:48.739564 kernel: Tracing variant of Tasks RCU enabled.
Jul 12 00:35:48.739570 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 12 00:35:48.739576 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 12 00:35:48.739583 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 12 00:35:48.739591 kernel: GICv3: 256 SPIs implemented
Jul 12 00:35:48.739597 kernel: GICv3: 0 Extended SPIs implemented
Jul 12 00:35:48.739603 kernel: GICv3: Distributor has no Range Selector support
Jul 12 00:35:48.739609 kernel: Root IRQ handler: gic_handle_irq
Jul 12 00:35:48.739615 kernel: GICv3: 16 PPIs implemented
Jul 12 00:35:48.739621 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 12 00:35:48.739628 kernel: ACPI: SRAT not present
Jul 12 00:35:48.739634 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 12 00:35:48.739640 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Jul 12 00:35:48.739646 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Jul 12 00:35:48.739653 kernel: GICv3: using LPI property table @0x00000000400d0000
Jul 12 00:35:48.739659 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Jul 12 00:35:48.739666 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:35:48.739672 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 12 00:35:48.739679 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 12 00:35:48.739685 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 12 00:35:48.739692 kernel: arm-pv: using stolen time PV
Jul 12 00:35:48.739698 kernel: Console: colour dummy device 80x25
Jul 12 00:35:48.739704 kernel: ACPI: Core revision 20210730
Jul 12 00:35:48.739711 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 12 00:35:48.739717 kernel: pid_max: default: 32768 minimum: 301
Jul 12 00:35:48.739724 kernel: LSM: Security Framework initializing
Jul 12 00:35:48.739731 kernel: SELinux: Initializing.
Jul 12 00:35:48.739738 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:35:48.739744 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:35:48.739751 kernel: rcu: Hierarchical SRCU implementation.
Jul 12 00:35:48.739757 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 12 00:35:48.739763 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 12 00:35:48.739770 kernel: Remapping and enabling EFI services.
Jul 12 00:35:48.739776 kernel: smp: Bringing up secondary CPUs ...
Jul 12 00:35:48.739782 kernel: Detected PIPT I-cache on CPU1
Jul 12 00:35:48.739789 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 12 00:35:48.739796 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Jul 12 00:35:48.739802 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:35:48.739809 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 12 00:35:48.739815 kernel: Detected PIPT I-cache on CPU2
Jul 12 00:35:48.739822 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 12 00:35:48.739829 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Jul 12 00:35:48.739835 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:35:48.739841 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 12 00:35:48.739847 kernel: Detected PIPT I-cache on CPU3
Jul 12 00:35:48.739855 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 12 00:35:48.739861 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Jul 12 00:35:48.739868 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:35:48.739874 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 12 00:35:48.739885 kernel: smp: Brought up 1 node, 4 CPUs
Jul 12 00:35:48.739894 kernel: SMP: Total of 4 processors activated.
Jul 12 00:35:48.739900 kernel: CPU features: detected: 32-bit EL0 Support
Jul 12 00:35:48.739907 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 12 00:35:48.739914 kernel: CPU features: detected: Common not Private translations
Jul 12 00:35:48.739920 kernel: CPU features: detected: CRC32 instructions
Jul 12 00:35:48.739927 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 12 00:35:48.739934 kernel: CPU features: detected: LSE atomic instructions
Jul 12 00:35:48.739942 kernel: CPU features: detected: Privileged Access Never
Jul 12 00:35:48.739948 kernel: CPU features: detected: RAS Extension Support
Jul 12 00:35:48.739955 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 12 00:35:48.739962 kernel: CPU: All CPU(s) started at EL1
Jul 12 00:35:48.739968 kernel: alternatives: patching kernel code
Jul 12 00:35:48.739976 kernel: devtmpfs: initialized
Jul 12 00:35:48.739983 kernel: KASLR enabled
Jul 12 00:35:48.739990 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 12 00:35:48.739996 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 12 00:35:48.740003 kernel: pinctrl core: initialized pinctrl subsystem
Jul 12 00:35:48.740009 kernel: SMBIOS 3.0.0 present.
Jul 12 00:35:48.740016 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Jul 12 00:35:48.740023 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 12 00:35:48.740029 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 12 00:35:48.740037 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 12 00:35:48.740044 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 12 00:35:48.740051 kernel: audit: initializing netlink subsys (disabled)
Jul 12 00:35:48.740057 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1
Jul 12 00:35:48.740064 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 12 00:35:48.740070 kernel: cpuidle: using governor menu
Jul 12 00:35:48.740077 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 12 00:35:48.740084 kernel: ASID allocator initialised with 32768 entries
Jul 12 00:35:48.740091 kernel: ACPI: bus type PCI registered
Jul 12 00:35:48.740099 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 12 00:35:48.740105 kernel: Serial: AMBA PL011 UART driver
Jul 12 00:35:48.740112 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 12 00:35:48.740119 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Jul 12 00:35:48.740125 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 12 00:35:48.740132 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Jul 12 00:35:48.740139 kernel: cryptd: max_cpu_qlen set to 1000
Jul 12 00:35:48.740145 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 12 00:35:48.740155 kernel: ACPI: Added _OSI(Module Device)
Jul 12 00:35:48.740163 kernel: ACPI: Added _OSI(Processor Device)
Jul 12 00:35:48.740170 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 12 00:35:48.740176 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 12 00:35:48.740183 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 12 00:35:48.740190 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 12 00:35:48.740198 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 12 00:35:48.740205 kernel: ACPI: Interpreter enabled
Jul 12 00:35:48.740212 kernel: ACPI: Using GIC for interrupt routing
Jul 12 00:35:48.740219 kernel: ACPI: MCFG table detected, 1 entries
Jul 12 00:35:48.740227 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 12 00:35:48.740234 kernel: printk: console [ttyAMA0] enabled
Jul 12 00:35:48.740240 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 12 00:35:48.740364 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 12 00:35:48.740469 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 12 00:35:48.740532 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 12 00:35:48.740597 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 12 00:35:48.740683 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 12 00:35:48.740693 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 12 00:35:48.740700 kernel: PCI host bridge to bus 0000:00
Jul 12 00:35:48.740776 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 12 00:35:48.740872 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 12 00:35:48.740945 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 12 00:35:48.740999 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 12 00:35:48.741076 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 12 00:35:48.741150 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 12 00:35:48.741213 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 12 00:35:48.741275 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 12 00:35:48.741335 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 12 00:35:48.741431 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 12 00:35:48.741496 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 12 00:35:48.741559 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 12 00:35:48.741612 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 12 00:35:48.741666 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 12 00:35:48.741726 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 12 00:35:48.741736 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 12 00:35:48.741743 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 12 00:35:48.741749 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 12 00:35:48.741761 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 12 00:35:48.741768 kernel: iommu: Default domain type: Translated
Jul 12 00:35:48.741776 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 12 00:35:48.741782 kernel: vgaarb: loaded
Jul 12 00:35:48.741789 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 12 00:35:48.741796 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 12 00:35:48.741803 kernel: PTP clock support registered
Jul 12 00:35:48.741809 kernel: Registered efivars operations
Jul 12 00:35:48.741816 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 12 00:35:48.741822 kernel: VFS: Disk quotas dquot_6.6.0
Jul 12 00:35:48.741831 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 12 00:35:48.741837 kernel: pnp: PnP ACPI init
Jul 12 00:35:48.741903 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 12 00:35:48.741913 kernel: pnp: PnP ACPI: found 1 devices
Jul 12 00:35:48.741919 kernel: NET: Registered PF_INET protocol family
Jul 12 00:35:48.741926 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 12 00:35:48.741933 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 12 00:35:48.741940 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 12 00:35:48.741948 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 12 00:35:48.741956 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Jul 12 00:35:48.741964 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 12 00:35:48.741979 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:35:48.741986 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:35:48.741992 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 12 00:35:48.741999 kernel: PCI: CLS 0 bytes, default 64
Jul 12 00:35:48.742008 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 12 00:35:48.742015 kernel: kvm [1]: HYP mode not available
Jul 12 00:35:48.742022 kernel: Initialise system trusted keyrings
Jul 12 00:35:48.742029 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 12 00:35:48.742036 kernel: Key type asymmetric registered
Jul 12 00:35:48.742042 kernel: Asymmetric key parser 'x509' registered
Jul 12 00:35:48.742049 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 12 00:35:48.742056 kernel: io scheduler mq-deadline registered
Jul 12 00:35:48.742062 kernel: io scheduler kyber registered
Jul 12 00:35:48.742069 kernel: io scheduler bfq registered
Jul 12 00:35:48.742076 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 12 00:35:48.742084 kernel: ACPI: button: Power Button [PWRB]
Jul 12 00:35:48.742091 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 12 00:35:48.742152 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 12 00:35:48.742161 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 12 00:35:48.742168 kernel: thunder_xcv, ver 1.0
Jul 12 00:35:48.742174 kernel: thunder_bgx, ver 1.0
Jul 12 00:35:48.742181 kernel: nicpf, ver 1.0
Jul 12 00:35:48.742188 kernel: nicvf, ver 1.0
Jul 12 00:35:48.742252 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 12 00:35:48.742310 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-12T00:35:48 UTC (1752280548)
Jul 12 00:35:48.742320 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 12 00:35:48.742326 kernel: NET: Registered PF_INET6 protocol family
Jul 12 00:35:48.742333 kernel: Segment Routing with IPv6
Jul 12 00:35:48.742339 kernel: In-situ OAM (IOAM) with IPv6
Jul 12 00:35:48.742346 kernel: NET: Registered PF_PACKET protocol family
Jul 12 00:35:48.742353 kernel: Key type dns_resolver registered
Jul 12 00:35:48.742360 kernel: registered taskstats version 1
Jul 12 00:35:48.742368 kernel: Loading compiled-in X.509 certificates
Jul 12 00:35:48.742375 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.186-flatcar: de2ee1d04443f96c763927c453375bbe23b5752a'
Jul 12 00:35:48.742390 kernel: Key type .fscrypt registered
Jul 12 00:35:48.742397 kernel: Key type fscrypt-provisioning registered
Jul 12 00:35:48.742404 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 12 00:35:48.742410 kernel: ima: Allocated hash algorithm: sha1
Jul 12 00:35:48.742423 kernel: ima: No architecture policies found
Jul 12 00:35:48.742430 kernel: clk: Disabling unused clocks
Jul 12 00:35:48.742437 kernel: Freeing unused kernel memory: 36416K
Jul 12 00:35:48.742445 kernel: Run /init as init process
Jul 12 00:35:48.742452 kernel: with arguments:
Jul 12 00:35:48.742458 kernel: /init
Jul 12 00:35:48.742465 kernel: with environment:
Jul 12 00:35:48.742471 kernel: HOME=/
Jul 12 00:35:48.742478 kernel: TERM=linux
Jul 12 00:35:48.742484 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 12 00:35:48.742493 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 12 00:35:48.742503 systemd[1]: Detected virtualization kvm.
Jul 12 00:35:48.742510 systemd[1]: Detected architecture arm64.
Jul 12 00:35:48.742517 systemd[1]: Running in initrd.
Jul 12 00:35:48.742524 systemd[1]: No hostname configured, using default hostname.
Jul 12 00:35:48.742531 systemd[1]: Hostname set to .
Jul 12 00:35:48.742539 systemd[1]: Initializing machine ID from VM UUID.
Jul 12 00:35:48.742546 systemd[1]: Queued start job for default target initrd.target.
Jul 12 00:35:48.742553 systemd[1]: Started systemd-ask-password-console.path.
Jul 12 00:35:48.742562 systemd[1]: Reached target cryptsetup.target.
Jul 12 00:35:48.742569 systemd[1]: Reached target paths.target.
Jul 12 00:35:48.742576 systemd[1]: Reached target slices.target.
Jul 12 00:35:48.742586 systemd[1]: Reached target swap.target.
Jul 12 00:35:48.742594 systemd[1]: Reached target timers.target.
Jul 12 00:35:48.742602 systemd[1]: Listening on iscsid.socket.
Jul 12 00:35:48.742609 systemd[1]: Listening on iscsiuio.socket.
Jul 12 00:35:48.742620 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 12 00:35:48.742627 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 12 00:35:48.742635 systemd[1]: Listening on systemd-journald.socket.
Jul 12 00:35:48.742642 systemd[1]: Listening on systemd-networkd.socket.
Jul 12 00:35:48.742651 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 12 00:35:48.742659 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 12 00:35:48.742666 systemd[1]: Reached target sockets.target.
Jul 12 00:35:48.742673 systemd[1]: Starting kmod-static-nodes.service...
Jul 12 00:35:48.745220 systemd[1]: Finished network-cleanup.service.
Jul 12 00:35:48.745235 systemd[1]: Starting systemd-fsck-usr.service...
Jul 12 00:35:48.745243 systemd[1]: Starting systemd-journald.service...
Jul 12 00:35:48.745250 systemd[1]: Starting systemd-modules-load.service...
Jul 12 00:35:48.745257 systemd[1]: Starting systemd-resolved.service...
Jul 12 00:35:48.745264 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 12 00:35:48.745271 systemd[1]: Finished kmod-static-nodes.service.
Jul 12 00:35:48.745279 systemd[1]: Finished systemd-fsck-usr.service.
Jul 12 00:35:48.745286 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 12 00:35:48.745293 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 12 00:35:48.745301 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 12 00:35:48.745308 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 12 00:35:48.745317 kernel: audit: type=1130 audit(1752280548.742:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:48.745327 systemd-journald[290]: Journal started
Jul 12 00:35:48.745407 systemd-journald[290]: Runtime Journal (/run/log/journal/a9bf8af831a54e6c8a0564c722d8c41d) is 6.0M, max 48.7M, 42.6M free.
Jul 12 00:35:48.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:48.736053 systemd-modules-load[291]: Inserted module 'overlay'
Jul 12 00:35:48.747100 systemd[1]: Started systemd-journald.service.
Jul 12 00:35:48.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:48.750399 kernel: audit: type=1130 audit(1752280548.747:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:48.762416 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 12 00:35:48.763540 systemd-resolved[292]: Positive Trust Anchors:
Jul 12 00:35:48.763555 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:35:48.763582 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 12 00:35:48.770371 systemd-resolved[292]: Defaulting to hostname 'linux'.
Jul 12 00:35:48.772445 kernel: Bridge firewalling registered
Jul 12 00:35:48.771407 systemd[1]: Started systemd-resolved.service.
Jul 12 00:35:48.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:48.772482 systemd-modules-load[291]: Inserted module 'br_netfilter'
Jul 12 00:35:48.777911 kernel: audit: type=1130 audit(1752280548.772:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:48.773239 systemd[1]: Reached target nss-lookup.target.
Jul 12 00:35:48.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:48.777263 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 12 00:35:48.783824 kernel: audit: type=1130 audit(1752280548.778:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:48.779587 systemd[1]: Starting dracut-cmdline.service...
Jul 12 00:35:48.788417 kernel: SCSI subsystem initialized
Jul 12 00:35:48.790437 dracut-cmdline[308]: dracut-dracut-053
Jul 12 00:35:48.794008 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6cb548cec1e3020e9c3dcbc1d7670f4d8bdc2e3c8e062898ccaed7fc9d588f65
Jul 12 00:35:48.800445 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 12 00:35:48.800473 kernel: device-mapper: uevent: version 1.0.3
Jul 12 00:35:48.801581 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Jul 12 00:35:48.803922 systemd-modules-load[291]: Inserted module 'dm_multipath'
Jul 12 00:35:48.805018 systemd[1]: Finished systemd-modules-load.service.
Jul 12 00:35:48.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:48.806794 systemd[1]: Starting systemd-sysctl.service...
Jul 12 00:35:48.810651 kernel: audit: type=1130 audit(1752280548.805:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:48.816219 systemd[1]: Finished systemd-sysctl.service.
Jul 12 00:35:48.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:48.820581 kernel: audit: type=1130 audit(1752280548.816:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:48.859418 kernel: Loading iSCSI transport class v2.0-870.
Jul 12 00:35:48.872423 kernel: iscsi: registered transport (tcp)
Jul 12 00:35:48.887422 kernel: iscsi: registered transport (qla4xxx)
Jul 12 00:35:48.887454 kernel: QLogic iSCSI HBA Driver
Jul 12 00:35:48.920534 systemd[1]: Finished dracut-cmdline.service.
Jul 12 00:35:48.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:48.924462 kernel: audit: type=1130 audit(1752280548.920:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:48.922225 systemd[1]: Starting dracut-pre-udev.service...
Jul 12 00:35:48.965435 kernel: raid6: neonx8 gen() 13760 MB/s
Jul 12 00:35:48.982409 kernel: raid6: neonx8 xor() 10781 MB/s
Jul 12 00:35:48.999408 kernel: raid6: neonx4 gen() 13480 MB/s
Jul 12 00:35:49.016407 kernel: raid6: neonx4 xor() 11226 MB/s
Jul 12 00:35:49.033408 kernel: raid6: neonx2 gen() 12928 MB/s
Jul 12 00:35:49.050405 kernel: raid6: neonx2 xor() 10423 MB/s
Jul 12 00:35:49.067406 kernel: raid6: neonx1 gen() 10564 MB/s
Jul 12 00:35:49.084406 kernel: raid6: neonx1 xor() 8748 MB/s
Jul 12 00:35:49.101405 kernel: raid6: int64x8 gen() 6263 MB/s
Jul 12 00:35:49.118400 kernel: raid6: int64x8 xor() 3539 MB/s
Jul 12 00:35:49.135409 kernel: raid6: int64x4 gen() 7132 MB/s
Jul 12 00:35:49.152426 kernel: raid6: int64x4 xor() 3850 MB/s
Jul 12 00:35:49.169408 kernel: raid6: int64x2 gen() 6142 MB/s
Jul 12 00:35:49.186409 kernel: raid6: int64x2 xor() 3312 MB/s
Jul 12 00:35:49.203412 kernel: raid6: int64x1 gen() 5036 MB/s
Jul 12 00:35:49.220553 kernel: raid6: int64x1 xor() 2641 MB/s
Jul 12 00:35:49.220574 kernel: raid6: using algorithm neonx8 gen() 13760 MB/s
Jul 12 00:35:49.220583 kernel: raid6: .... xor() 10781 MB/s, rmw enabled
Jul 12 00:35:49.221690 kernel: raid6: using neon recovery algorithm
Jul 12 00:35:49.232682 kernel: xor: measuring software checksum speed
Jul 12 00:35:49.232707 kernel: 8regs : 17235 MB/sec
Jul 12 00:35:49.233406 kernel: 32regs : 20707 MB/sec
Jul 12 00:35:49.234682 kernel: arm64_neon : 23059 MB/sec
Jul 12 00:35:49.234693 kernel: xor: using function: arm64_neon (23059 MB/sec)
Jul 12 00:35:49.292439 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Jul 12 00:35:49.302952 systemd[1]: Finished dracut-pre-udev.service.
Jul 12 00:35:49.307907 kernel: audit: type=1130 audit(1752280549.303:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:49.307931 kernel: audit: type=1334 audit(1752280549.306:10): prog-id=7 op=LOAD
Jul 12 00:35:49.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:49.306000 audit: BPF prog-id=7 op=LOAD
Jul 12 00:35:49.307000 audit: BPF prog-id=8 op=LOAD
Jul 12 00:35:49.308336 systemd[1]: Starting systemd-udevd.service...
Jul 12 00:35:49.320855 systemd-udevd[493]: Using default interface naming scheme 'v252'.
Jul 12 00:35:49.324170 systemd[1]: Started systemd-udevd.service.
Jul 12 00:35:49.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:49.326746 systemd[1]: Starting dracut-pre-trigger.service...
Jul 12 00:35:49.338156 dracut-pre-trigger[502]: rd.md=0: removing MD RAID activation
Jul 12 00:35:49.365689 systemd[1]: Finished dracut-pre-trigger.service.
Jul 12 00:35:49.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:49.367239 systemd[1]: Starting systemd-udev-trigger.service... Jul 12 00:35:49.399689 systemd[1]: Finished systemd-udev-trigger.service. Jul 12 00:35:49.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:49.426076 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 12 00:35:49.431038 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 12 00:35:49.431053 kernel: GPT:9289727 != 19775487 Jul 12 00:35:49.431062 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 12 00:35:49.431075 kernel: GPT:9289727 != 19775487 Jul 12 00:35:49.431083 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 12 00:35:49.431091 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 12 00:35:49.445302 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (543) Jul 12 00:35:49.444270 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 12 00:35:49.445462 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 12 00:35:49.451921 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 12 00:35:49.457052 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 12 00:35:49.460525 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 12 00:35:49.462892 systemd[1]: Starting disk-uuid.service... Jul 12 00:35:49.468803 disk-uuid[563]: Primary Header is updated. Jul 12 00:35:49.468803 disk-uuid[563]: Secondary Entries is updated. Jul 12 00:35:49.468803 disk-uuid[563]: Secondary Header is updated. 
Jul 12 00:35:49.472415 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 12 00:35:49.479419 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 12 00:35:49.482406 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 12 00:35:50.483966 disk-uuid[564]: The operation has completed successfully. Jul 12 00:35:50.485042 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 12 00:35:50.508915 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 12 00:35:50.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:50.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:50.509008 systemd[1]: Finished disk-uuid.service. Jul 12 00:35:50.510650 systemd[1]: Starting verity-setup.service... Jul 12 00:35:50.525415 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 12 00:35:50.546858 systemd[1]: Found device dev-mapper-usr.device. Jul 12 00:35:50.549138 systemd[1]: Mounting sysusr-usr.mount... Jul 12 00:35:50.552150 systemd[1]: Finished verity-setup.service. Jul 12 00:35:50.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:50.599400 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 12 00:35:50.599373 systemd[1]: Mounted sysusr-usr.mount. Jul 12 00:35:50.600227 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 12 00:35:50.600969 systemd[1]: Starting ignition-setup.service... Jul 12 00:35:50.603207 systemd[1]: Starting parse-ip-for-networkd.service... 
Jul 12 00:35:50.610799 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:35:50.611345 kernel: BTRFS info (device vda6): using free space tree Jul 12 00:35:50.611362 kernel: BTRFS info (device vda6): has skinny extents Jul 12 00:35:50.618785 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 12 00:35:50.626226 systemd[1]: Finished ignition-setup.service. Jul 12 00:35:50.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:50.627923 systemd[1]: Starting ignition-fetch-offline.service... Jul 12 00:35:50.683033 systemd[1]: Finished parse-ip-for-networkd.service. Jul 12 00:35:50.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:50.684000 audit: BPF prog-id=9 op=LOAD Jul 12 00:35:50.685490 systemd[1]: Starting systemd-networkd.service... Jul 12 00:35:50.715629 systemd-networkd[740]: lo: Link UP Jul 12 00:35:50.715642 systemd-networkd[740]: lo: Gained carrier Jul 12 00:35:50.716031 systemd-networkd[740]: Enumeration completed Jul 12 00:35:50.716322 systemd-networkd[740]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:35:50.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:50.716451 systemd[1]: Started systemd-networkd.service. Jul 12 00:35:50.717434 systemd-networkd[740]: eth0: Link UP Jul 12 00:35:50.717437 systemd-networkd[740]: eth0: Gained carrier Jul 12 00:35:50.717670 systemd[1]: Reached target network.target. Jul 12 00:35:50.719687 systemd[1]: Starting iscsiuio.service... 
Jul 12 00:35:50.731585 systemd[1]: Started iscsiuio.service. Jul 12 00:35:50.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:50.733086 systemd[1]: Starting iscsid.service... Jul 12 00:35:50.734101 ignition[658]: Ignition 2.14.0 Jul 12 00:35:50.734108 ignition[658]: Stage: fetch-offline Jul 12 00:35:50.734145 ignition[658]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:35:50.738465 iscsid[745]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 12 00:35:50.738465 iscsid[745]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 12 00:35:50.738465 iscsid[745]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 12 00:35:50.738465 iscsid[745]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 12 00:35:50.738465 iscsid[745]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 12 00:35:50.738465 iscsid[745]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 12 00:35:50.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:50.734154 ignition[658]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:35:50.740775 systemd[1]: Started iscsid.service.
Jul 12 00:35:50.734277 ignition[658]: parsed url from cmdline: "" Jul 12 00:35:50.745460 systemd-networkd[740]: eth0: DHCPv4 address 10.0.0.111/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 12 00:35:50.734280 ignition[658]: no config URL provided Jul 12 00:35:50.746760 systemd[1]: Starting dracut-initqueue.service... Jul 12 00:35:50.734285 ignition[658]: reading system config file "/usr/lib/ignition/user.ign" Jul 12 00:35:50.734292 ignition[658]: no config at "/usr/lib/ignition/user.ign" Jul 12 00:35:50.734309 ignition[658]: op(1): [started] loading QEMU firmware config module Jul 12 00:35:50.734313 ignition[658]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 12 00:35:50.760233 systemd[1]: Finished dracut-initqueue.service. Jul 12 00:35:50.762632 systemd[1]: Reached target remote-fs-pre.target. Jul 12 00:35:50.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:50.761075 ignition[658]: op(1): [finished] loading QEMU firmware config module Jul 12 00:35:50.764005 systemd[1]: Reached target remote-cryptsetup.target. Jul 12 00:35:50.765558 systemd[1]: Reached target remote-fs.target. Jul 12 00:35:50.767973 systemd[1]: Starting dracut-pre-mount.service... Jul 12 00:35:50.775442 systemd[1]: Finished dracut-pre-mount.service. Jul 12 00:35:50.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:35:50.805885 ignition[658]: parsing config with SHA512: 3b0016aae029dc7df10712bab6736f163055d33e34e5ccd8bd0c87d2ee6c572841f9dbc290424167875f201ba4b8cf003976fe2cb98ff13ba26655baf811f31c Jul 12 00:35:50.816894 unknown[658]: fetched base config from "system" Jul 12 00:35:50.816905 unknown[658]: fetched user config from "qemu" Jul 12 00:35:50.817529 ignition[658]: fetch-offline: fetch-offline passed Jul 12 00:35:50.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:50.818900 systemd[1]: Finished ignition-fetch-offline.service. Jul 12 00:35:50.817585 ignition[658]: Ignition finished successfully Jul 12 00:35:50.820365 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 12 00:35:50.821244 systemd[1]: Starting ignition-kargs.service... Jul 12 00:35:50.829617 ignition[761]: Ignition 2.14.0 Jul 12 00:35:50.829626 ignition[761]: Stage: kargs Jul 12 00:35:50.829713 ignition[761]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:35:50.829722 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:35:50.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:50.831785 systemd[1]: Finished ignition-kargs.service. Jul 12 00:35:50.830633 ignition[761]: kargs: kargs passed Jul 12 00:35:50.833934 systemd[1]: Starting ignition-disks.service... 
Jul 12 00:35:50.830672 ignition[761]: Ignition finished successfully Jul 12 00:35:50.840288 ignition[768]: Ignition 2.14.0 Jul 12 00:35:50.840297 ignition[768]: Stage: disks Jul 12 00:35:50.840400 ignition[768]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:35:50.840419 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:35:50.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:50.841985 systemd[1]: Finished ignition-disks.service. Jul 12 00:35:50.841317 ignition[768]: disks: disks passed Jul 12 00:35:50.843227 systemd[1]: Reached target initrd-root-device.target. Jul 12 00:35:50.841358 ignition[768]: Ignition finished successfully Jul 12 00:35:50.844807 systemd[1]: Reached target local-fs-pre.target. Jul 12 00:35:50.846149 systemd[1]: Reached target local-fs.target. Jul 12 00:35:50.847343 systemd[1]: Reached target sysinit.target. Jul 12 00:35:50.848723 systemd[1]: Reached target basic.target. Jul 12 00:35:50.850850 systemd[1]: Starting systemd-fsck-root.service... Jul 12 00:35:50.861035 systemd-fsck[776]: ROOT: clean, 619/553520 files, 56022/553472 blocks Jul 12 00:35:50.865433 systemd[1]: Finished systemd-fsck-root.service. Jul 12 00:35:50.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:50.866942 systemd[1]: Mounting sysroot.mount... Jul 12 00:35:50.872401 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 12 00:35:50.873017 systemd[1]: Mounted sysroot.mount. Jul 12 00:35:50.873772 systemd[1]: Reached target initrd-root-fs.target. Jul 12 00:35:50.876562 systemd[1]: Mounting sysroot-usr.mount... 
Jul 12 00:35:50.877448 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 12 00:35:50.877487 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 12 00:35:50.877507 systemd[1]: Reached target ignition-diskful.target. Jul 12 00:35:50.879249 systemd[1]: Mounted sysroot-usr.mount. Jul 12 00:35:50.881066 systemd[1]: Starting initrd-setup-root.service... Jul 12 00:35:50.885175 initrd-setup-root[786]: cut: /sysroot/etc/passwd: No such file or directory Jul 12 00:35:50.888834 initrd-setup-root[794]: cut: /sysroot/etc/group: No such file or directory Jul 12 00:35:50.892889 initrd-setup-root[802]: cut: /sysroot/etc/shadow: No such file or directory Jul 12 00:35:50.896259 initrd-setup-root[810]: cut: /sysroot/etc/gshadow: No such file or directory Jul 12 00:35:50.920927 systemd[1]: Finished initrd-setup-root.service. Jul 12 00:35:50.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:50.922546 systemd[1]: Starting ignition-mount.service... Jul 12 00:35:50.923848 systemd[1]: Starting sysroot-boot.service... Jul 12 00:35:50.928100 bash[827]: umount: /sysroot/usr/share/oem: not mounted. 
Jul 12 00:35:50.937097 ignition[828]: INFO : Ignition 2.14.0 Jul 12 00:35:50.937097 ignition[828]: INFO : Stage: mount Jul 12 00:35:50.938718 ignition[828]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 00:35:50.938718 ignition[828]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:35:50.938718 ignition[828]: INFO : mount: mount passed Jul 12 00:35:50.938718 ignition[828]: INFO : Ignition finished successfully Jul 12 00:35:50.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:50.939427 systemd[1]: Finished ignition-mount.service. Jul 12 00:35:50.944634 systemd[1]: Finished sysroot-boot.service. Jul 12 00:35:50.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:51.563274 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 12 00:35:51.570425 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (837) Jul 12 00:35:51.574683 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:35:51.574706 kernel: BTRFS info (device vda6): using free space tree Jul 12 00:35:51.574716 kernel: BTRFS info (device vda6): has skinny extents Jul 12 00:35:51.578689 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 12 00:35:51.580330 systemd[1]: Starting ignition-files.service... 
Jul 12 00:35:51.594348 ignition[857]: INFO : Ignition 2.14.0 Jul 12 00:35:51.594348 ignition[857]: INFO : Stage: files Jul 12 00:35:51.596026 ignition[857]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 00:35:51.596026 ignition[857]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:35:51.596026 ignition[857]: DEBUG : files: compiled without relabeling support, skipping Jul 12 00:35:51.599685 ignition[857]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 12 00:35:51.599685 ignition[857]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 12 00:35:51.602582 ignition[857]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 12 00:35:51.603927 ignition[857]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 12 00:35:51.603927 ignition[857]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 12 00:35:51.603482 unknown[857]: wrote ssh authorized keys file for user: core Jul 12 00:35:51.607821 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 12 00:35:51.607821 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 12 00:35:51.607821 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 12 00:35:51.607821 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 12 00:35:51.662819 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 12 00:35:51.839993 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 12 
00:35:51.839993 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 12 00:35:51.843826 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 12 00:35:51.843826 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 12 00:35:51.843826 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 12 00:35:51.843826 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 12 00:35:51.843826 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 12 00:35:51.843826 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 12 00:35:51.843826 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 12 00:35:51.843826 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 12 00:35:51.843826 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 12 00:35:51.843826 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:35:51.843826 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:35:51.843826 
ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:35:51.843826 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Jul 12 00:35:52.178519 systemd-networkd[740]: eth0: Gained IPv6LL Jul 12 00:35:52.296457 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 12 00:35:53.045389 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:35:53.045389 ignition[857]: INFO : files: op(c): [started] processing unit "containerd.service" Jul 12 00:35:53.049330 ignition[857]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 12 00:35:53.049330 ignition[857]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 12 00:35:53.049330 ignition[857]: INFO : files: op(c): [finished] processing unit "containerd.service" Jul 12 00:35:53.049330 ignition[857]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jul 12 00:35:53.049330 ignition[857]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 12 00:35:53.049330 ignition[857]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 12 00:35:53.049330 ignition[857]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jul 12 00:35:53.049330 ignition[857]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Jul 12 
00:35:53.049330 ignition[857]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 12 00:35:53.049330 ignition[857]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 12 00:35:53.049330 ignition[857]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Jul 12 00:35:53.049330 ignition[857]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 12 00:35:53.049330 ignition[857]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 12 00:35:53.049330 ignition[857]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Jul 12 00:35:53.049330 ignition[857]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 12 00:35:53.114232 ignition[857]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 12 00:35:53.114232 ignition[857]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Jul 12 00:35:53.119299 ignition[857]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 12 00:35:53.119299 ignition[857]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 12 00:35:53.119299 ignition[857]: INFO : files: files passed Jul 12 00:35:53.119299 ignition[857]: INFO : Ignition finished successfully Jul 12 00:35:53.129438 kernel: kauditd_printk_skb: 22 callbacks suppressed Jul 12 00:35:53.129461 kernel: audit: type=1130 audit(1752280553.119:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:35:53.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.117483 systemd[1]: Finished ignition-files.service. Jul 12 00:35:53.120937 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 12 00:35:53.125595 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 12 00:35:53.138493 kernel: audit: type=1130 audit(1752280553.131:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.138517 kernel: audit: type=1131 audit(1752280553.131:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.138667 initrd-setup-root-after-ignition[882]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Jul 12 00:35:53.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.126475 systemd[1]: Starting ignition-quench.service... 
Jul 12 00:35:53.144791 kernel: audit: type=1130 audit(1752280553.138:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.144852 initrd-setup-root-after-ignition[885]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 12 00:35:53.130590 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 12 00:35:53.130679 systemd[1]: Finished ignition-quench.service. Jul 12 00:35:53.132449 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 12 00:35:53.139536 systemd[1]: Reached target ignition-complete.target. Jul 12 00:35:53.144869 systemd[1]: Starting initrd-parse-etc.service... Jul 12 00:35:53.158639 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 12 00:35:53.158739 systemd[1]: Finished initrd-parse-etc.service. Jul 12 00:35:53.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.160441 systemd[1]: Reached target initrd-fs.target. Jul 12 00:35:53.166776 kernel: audit: type=1130 audit(1752280553.159:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.166803 kernel: audit: type=1131 audit(1752280553.159:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.166163 systemd[1]: Reached target initrd.target. 
Jul 12 00:35:53.167487 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 12 00:35:53.168217 systemd[1]: Starting dracut-pre-pivot.service... Jul 12 00:35:53.178442 systemd[1]: Finished dracut-pre-pivot.service. Jul 12 00:35:53.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.179961 systemd[1]: Starting initrd-cleanup.service... Jul 12 00:35:53.183623 kernel: audit: type=1130 audit(1752280553.178:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.187876 systemd[1]: Stopped target nss-lookup.target. Jul 12 00:35:53.188797 systemd[1]: Stopped target remote-cryptsetup.target. Jul 12 00:35:53.190211 systemd[1]: Stopped target timers.target. Jul 12 00:35:53.191540 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 12 00:35:53.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.191646 systemd[1]: Stopped dracut-pre-pivot.service. Jul 12 00:35:53.197057 kernel: audit: type=1131 audit(1752280553.192:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.192900 systemd[1]: Stopped target initrd.target. Jul 12 00:35:53.196546 systemd[1]: Stopped target basic.target. Jul 12 00:35:53.197782 systemd[1]: Stopped target ignition-complete.target. Jul 12 00:35:53.199133 systemd[1]: Stopped target ignition-diskful.target. Jul 12 00:35:53.200470 systemd[1]: Stopped target initrd-root-device.target. 
Jul 12 00:35:53.201931 systemd[1]: Stopped target remote-fs.target. Jul 12 00:35:53.203454 systemd[1]: Stopped target remote-fs-pre.target. Jul 12 00:35:53.204890 systemd[1]: Stopped target sysinit.target. Jul 12 00:35:53.206157 systemd[1]: Stopped target local-fs.target. Jul 12 00:35:53.207468 systemd[1]: Stopped target local-fs-pre.target. Jul 12 00:35:53.208759 systemd[1]: Stopped target swap.target. Jul 12 00:35:53.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.209945 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 12 00:35:53.215702 kernel: audit: type=1131 audit(1752280553.210:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.210053 systemd[1]: Stopped dracut-pre-mount.service. Jul 12 00:35:53.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.211454 systemd[1]: Stopped target cryptsetup.target. Jul 12 00:35:53.220674 kernel: audit: type=1131 audit(1752280553.215:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.214966 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 12 00:35:53.215065 systemd[1]: Stopped dracut-initqueue.service. 
Jul 12 00:35:53.216522 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 12 00:35:53.216612 systemd[1]: Stopped ignition-fetch-offline.service. Jul 12 00:35:53.220184 systemd[1]: Stopped target paths.target. Jul 12 00:35:53.221338 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 12 00:35:53.225429 systemd[1]: Stopped systemd-ask-password-console.path. Jul 12 00:35:53.227157 systemd[1]: Stopped target slices.target. Jul 12 00:35:53.228508 systemd[1]: Stopped target sockets.target. Jul 12 00:35:53.229749 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 12 00:35:53.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.229855 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 12 00:35:53.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.231203 systemd[1]: ignition-files.service: Deactivated successfully. Jul 12 00:35:53.231295 systemd[1]: Stopped ignition-files.service. Jul 12 00:35:53.233649 systemd[1]: Stopping ignition-mount.service... Jul 12 00:35:53.236844 iscsid[745]: iscsid shutting down. Jul 12 00:35:53.235151 systemd[1]: Stopping iscsid.service... Jul 12 00:35:53.236095 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 12 00:35:53.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.236204 systemd[1]: Stopped kmod-static-nodes.service. 
Jul 12 00:35:53.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.241505 ignition[898]: INFO : Ignition 2.14.0 Jul 12 00:35:53.241505 ignition[898]: INFO : Stage: umount Jul 12 00:35:53.241505 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 00:35:53.241505 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:35:53.241505 ignition[898]: INFO : umount: umount passed Jul 12 00:35:53.241505 ignition[898]: INFO : Ignition finished successfully Jul 12 00:35:53.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.238317 systemd[1]: Stopping sysroot-boot.service... Jul 12 00:35:53.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.239214 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 12 00:35:53.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.239341 systemd[1]: Stopped systemd-udev-trigger.service. 
Jul 12 00:35:53.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.240897 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 12 00:35:53.240989 systemd[1]: Stopped dracut-pre-trigger.service. Jul 12 00:35:53.243656 systemd[1]: iscsid.service: Deactivated successfully. Jul 12 00:35:53.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.243748 systemd[1]: Stopped iscsid.service. Jul 12 00:35:53.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.245635 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 12 00:35:53.245703 systemd[1]: Stopped ignition-mount.service. Jul 12 00:35:53.247577 systemd[1]: iscsid.socket: Deactivated successfully. Jul 12 00:35:53.247643 systemd[1]: Closed iscsid.socket. Jul 12 00:35:53.248716 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 12 00:35:53.248756 systemd[1]: Stopped ignition-disks.service. Jul 12 00:35:53.251052 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 12 00:35:53.251093 systemd[1]: Stopped ignition-kargs.service. Jul 12 00:35:53.253059 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 12 00:35:53.253114 systemd[1]: Stopped ignition-setup.service. 
Jul 12 00:35:53.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.255212 systemd[1]: Stopping iscsiuio.service... Jul 12 00:35:53.258638 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 12 00:35:53.259070 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 12 00:35:53.259149 systemd[1]: Stopped iscsiuio.service. Jul 12 00:35:53.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.260423 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 12 00:35:53.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.260496 systemd[1]: Finished initrd-cleanup.service. Jul 12 00:35:53.263234 systemd[1]: Stopped target network.target. Jul 12 00:35:53.264155 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 12 00:35:53.264194 systemd[1]: Closed iscsiuio.socket. Jul 12 00:35:53.266087 systemd[1]: Stopping systemd-networkd.service... Jul 12 00:35:53.267084 systemd[1]: Stopping systemd-resolved.service... Jul 12 00:35:53.272460 systemd-networkd[740]: eth0: DHCPv6 lease lost Jul 12 00:35:53.273880 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 12 00:35:53.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jul 12 00:35:53.294000 audit: BPF prog-id=9 op=UNLOAD Jul 12 00:35:53.273979 systemd[1]: Stopped systemd-networkd.service. Jul 12 00:35:53.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.296000 audit: BPF prog-id=6 op=UNLOAD Jul 12 00:35:53.275797 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 12 00:35:53.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.275828 systemd[1]: Closed systemd-networkd.socket. Jul 12 00:35:53.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.278083 systemd[1]: Stopping network-cleanup.service... Jul 12 00:35:53.279659 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 12 00:35:53.279721 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 12 00:35:53.282271 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 00:35:53.282317 systemd[1]: Stopped systemd-sysctl.service. Jul 12 00:35:53.283791 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 12 00:35:53.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:35:53.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.283836 systemd[1]: Stopped systemd-modules-load.service. Jul 12 00:35:53.284844 systemd[1]: Stopping systemd-udevd.service... Jul 12 00:35:53.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.290314 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 12 00:35:53.290790 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 12 00:35:53.290885 systemd[1]: Stopped systemd-resolved.service. Jul 12 00:35:53.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:53.296007 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 12 00:35:53.296096 systemd[1]: Stopped network-cleanup.service. Jul 12 00:35:53.297483 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 12 00:35:53.297597 systemd[1]: Stopped systemd-udevd.service. Jul 12 00:35:53.298824 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 12 00:35:53.298888 systemd[1]: Stopped sysroot-boot.service. 
Jul 12 00:35:53.300229 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 12 00:35:53.300262 systemd[1]: Closed systemd-udevd-control.socket. Jul 12 00:35:53.301514 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 12 00:35:53.301548 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 12 00:35:53.302981 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 12 00:35:53.326000 audit: BPF prog-id=8 op=UNLOAD Jul 12 00:35:53.327000 audit: BPF prog-id=7 op=UNLOAD Jul 12 00:35:53.327000 audit: BPF prog-id=5 op=UNLOAD Jul 12 00:35:53.327000 audit: BPF prog-id=4 op=UNLOAD Jul 12 00:35:53.327000 audit: BPF prog-id=3 op=UNLOAD Jul 12 00:35:53.303023 systemd[1]: Stopped dracut-pre-udev.service. Jul 12 00:35:53.304615 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 12 00:35:53.304654 systemd[1]: Stopped dracut-cmdline.service. Jul 12 00:35:53.305945 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 12 00:35:53.305984 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 12 00:35:53.307598 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 12 00:35:53.307637 systemd[1]: Stopped initrd-setup-root.service. Jul 12 00:35:53.310432 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 12 00:35:53.311194 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:35:53.311248 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 12 00:35:53.315592 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 12 00:35:53.315671 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 12 00:35:53.316780 systemd[1]: Reached target initrd-switch-root.target. Jul 12 00:35:53.318980 systemd[1]: Starting initrd-switch-root.service... Jul 12 00:35:53.341629 systemd-journald[290]: Received SIGTERM from PID 1 (n/a). Jul 12 00:35:53.324992 systemd[1]: Switching root. 
Jul 12 00:35:53.342198 systemd-journald[290]: Journal stopped Jul 12 00:35:55.379076 kernel: SELinux: Class mctp_socket not defined in policy. Jul 12 00:35:55.379138 kernel: SELinux: Class anon_inode not defined in policy. Jul 12 00:35:55.379153 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 12 00:35:55.379163 kernel: SELinux: policy capability network_peer_controls=1 Jul 12 00:35:55.379173 kernel: SELinux: policy capability open_perms=1 Jul 12 00:35:55.379223 kernel: SELinux: policy capability extended_socket_class=1 Jul 12 00:35:55.379241 kernel: SELinux: policy capability always_check_network=0 Jul 12 00:35:55.379251 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 12 00:35:55.379261 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 12 00:35:55.379270 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 12 00:35:55.379280 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 12 00:35:55.379298 systemd[1]: Successfully loaded SELinux policy in 34.736ms. Jul 12 00:35:55.379344 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.703ms. Jul 12 00:35:55.379363 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 12 00:35:55.379376 systemd[1]: Detected virtualization kvm. Jul 12 00:35:55.379407 systemd[1]: Detected architecture arm64. Jul 12 00:35:55.379427 systemd[1]: Detected first boot. Jul 12 00:35:55.379445 systemd[1]: Initializing machine ID from VM UUID. Jul 12 00:35:55.379456 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 12 00:35:55.379468 systemd[1]: Populated /etc with preset unit settings. 
Jul 12 00:35:55.379479 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 12 00:35:55.379490 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 12 00:35:55.379501 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:35:55.379514 systemd[1]: Queued start job for default target multi-user.target. Jul 12 00:35:55.379526 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 12 00:35:55.379536 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 12 00:35:55.379548 systemd[1]: Created slice system-addon\x2drun.slice. Jul 12 00:35:55.379558 systemd[1]: Created slice system-getty.slice. Jul 12 00:35:55.379567 systemd[1]: Created slice system-modprobe.slice. Jul 12 00:35:55.379578 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 12 00:35:55.379592 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 12 00:35:55.379603 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 12 00:35:55.379613 systemd[1]: Created slice user.slice. Jul 12 00:35:55.379624 systemd[1]: Started systemd-ask-password-console.path. Jul 12 00:35:55.379637 systemd[1]: Started systemd-ask-password-wall.path. Jul 12 00:35:55.379648 systemd[1]: Set up automount boot.automount. Jul 12 00:35:55.379658 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 12 00:35:55.379669 systemd[1]: Reached target integritysetup.target. Jul 12 00:35:55.379678 systemd[1]: Reached target remote-cryptsetup.target. Jul 12 00:35:55.379689 systemd[1]: Reached target remote-fs.target. Jul 12 00:35:55.379704 systemd[1]: Reached target slices.target. 
Jul 12 00:35:55.379716 systemd[1]: Reached target swap.target. Jul 12 00:35:55.379726 systemd[1]: Reached target torcx.target. Jul 12 00:35:55.379737 systemd[1]: Reached target veritysetup.target. Jul 12 00:35:55.379747 systemd[1]: Listening on systemd-coredump.socket. Jul 12 00:35:55.379757 systemd[1]: Listening on systemd-initctl.socket. Jul 12 00:35:55.379767 systemd[1]: Listening on systemd-journald-audit.socket. Jul 12 00:35:55.379777 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 12 00:35:55.379787 systemd[1]: Listening on systemd-journald.socket. Jul 12 00:35:55.379798 systemd[1]: Listening on systemd-networkd.socket. Jul 12 00:35:55.379813 systemd[1]: Listening on systemd-udevd-control.socket. Jul 12 00:35:55.379824 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 12 00:35:55.379834 systemd[1]: Listening on systemd-userdbd.socket. Jul 12 00:35:55.379845 systemd[1]: Mounting dev-hugepages.mount... Jul 12 00:35:55.379855 systemd[1]: Mounting dev-mqueue.mount... Jul 12 00:35:55.379865 systemd[1]: Mounting media.mount... Jul 12 00:35:55.379880 systemd[1]: Mounting sys-kernel-debug.mount... Jul 12 00:35:55.379896 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 12 00:35:55.379906 systemd[1]: Mounting tmp.mount... Jul 12 00:35:55.379916 systemd[1]: Starting flatcar-tmpfiles.service... Jul 12 00:35:55.379929 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 12 00:35:55.379939 systemd[1]: Starting kmod-static-nodes.service... Jul 12 00:35:55.379950 systemd[1]: Starting modprobe@configfs.service... Jul 12 00:35:55.379960 systemd[1]: Starting modprobe@dm_mod.service... Jul 12 00:35:55.379974 systemd[1]: Starting modprobe@drm.service... Jul 12 00:35:55.379986 systemd[1]: Starting modprobe@efi_pstore.service... Jul 12 00:35:55.379996 systemd[1]: Starting modprobe@fuse.service... Jul 12 00:35:55.380006 systemd[1]: Starting modprobe@loop.service... 
Jul 12 00:35:55.380017 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 12 00:35:55.380030 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 12 00:35:55.380042 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Jul 12 00:35:55.380056 systemd[1]: Starting systemd-journald.service... Jul 12 00:35:55.380066 kernel: fuse: init (API version 7.34) Jul 12 00:35:55.380076 systemd[1]: Starting systemd-modules-load.service... Jul 12 00:35:55.380087 systemd[1]: Starting systemd-network-generator.service... Jul 12 00:35:55.380097 systemd[1]: Starting systemd-remount-fs.service... Jul 12 00:35:55.380107 systemd[1]: Starting systemd-udev-trigger.service... Jul 12 00:35:55.380117 systemd[1]: Mounted dev-hugepages.mount. Jul 12 00:35:55.380129 systemd[1]: Mounted dev-mqueue.mount. Jul 12 00:35:55.380139 systemd[1]: Mounted media.mount. Jul 12 00:35:55.380149 systemd[1]: Mounted sys-kernel-debug.mount. Jul 12 00:35:55.380159 kernel: loop: module loaded Jul 12 00:35:55.380169 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 12 00:35:55.380182 systemd-journald[1028]: Journal started Jul 12 00:35:55.380230 systemd-journald[1028]: Runtime Journal (/run/log/journal/a9bf8af831a54e6c8a0564c722d8c41d) is 6.0M, max 48.7M, 42.6M free. Jul 12 00:35:55.380264 systemd[1]: Mounted tmp.mount. 
Jul 12 00:35:55.291000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 12 00:35:55.291000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jul 12 00:35:55.374000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 12 00:35:55.374000 audit[1028]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=fffff4713bf0 a2=4000 a3=1 items=0 ppid=1 pid=1028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:35:55.374000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 12 00:35:55.383009 systemd[1]: Started systemd-journald.service. Jul 12 00:35:55.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:55.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:55.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:35:55.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:55.384673 systemd[1]: Finished kmod-static-nodes.service. Jul 12 00:35:55.385823 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 12 00:35:55.386028 systemd[1]: Finished modprobe@configfs.service. Jul 12 00:35:55.387222 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:35:55.387370 systemd[1]: Finished modprobe@dm_mod.service. Jul 12 00:35:55.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:55.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:55.388514 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:35:55.389622 systemd[1]: Finished modprobe@drm.service. Jul 12 00:35:55.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:55.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:55.390731 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:35:55.390882 systemd[1]: Finished modprobe@efi_pstore.service. 
Jul 12 00:35:55.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:55.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:55.392220 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 12 00:35:55.393628 systemd[1]: Finished modprobe@fuse.service. Jul 12 00:35:55.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:55.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:55.394923 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:35:55.395145 systemd[1]: Finished modprobe@loop.service. Jul 12 00:35:55.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:55.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:55.396422 systemd[1]: Finished systemd-modules-load.service. 
Jul 12 00:35:55.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:55.397700 systemd[1]: Finished systemd-network-generator.service. Jul 12 00:35:55.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:55.399444 systemd[1]: Finished systemd-remount-fs.service. Jul 12 00:35:55.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:55.400769 systemd[1]: Reached target network-pre.target. Jul 12 00:35:55.402693 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 12 00:35:55.405550 systemd[1]: Mounting sys-kernel-config.mount... Jul 12 00:35:55.406288 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 12 00:35:55.408032 systemd[1]: Starting systemd-hwdb-update.service... Jul 12 00:35:55.410229 systemd[1]: Starting systemd-journal-flush.service... Jul 12 00:35:55.411216 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:35:55.412499 systemd[1]: Starting systemd-random-seed.service... Jul 12 00:35:55.413437 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 12 00:35:55.414654 systemd[1]: Starting systemd-sysctl.service... Jul 12 00:35:55.418480 systemd[1]: Finished flatcar-tmpfiles.service. 
Jul 12 00:35:55.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:55.419609 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 12 00:35:55.420762 systemd[1]: Mounted sys-kernel-config.mount. Jul 12 00:35:55.421884 systemd[1]: Finished systemd-random-seed.service. Jul 12 00:35:55.422707 systemd-journald[1028]: Time spent on flushing to /var/log/journal/a9bf8af831a54e6c8a0564c722d8c41d is 13.597ms for 935 entries. Jul 12 00:35:55.422707 systemd-journald[1028]: System Journal (/var/log/journal/a9bf8af831a54e6c8a0564c722d8c41d) is 8.0M, max 195.6M, 187.6M free. Jul 12 00:35:55.445684 systemd-journald[1028]: Received client request to flush runtime journal. Jul 12 00:35:55.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:55.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:55.423949 systemd[1]: Reached target first-boot-complete.target. Jul 12 00:35:55.426362 systemd[1]: Starting systemd-sysusers.service... Jul 12 00:35:55.436089 systemd[1]: Finished systemd-sysctl.service. Jul 12 00:35:55.446067 systemd[1]: Finished systemd-udev-trigger.service. Jul 12 00:35:55.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:35:55.447547 systemd[1]: Finished systemd-journal-flush.service. 
Jul 12 00:35:55.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:55.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:55.448781 systemd[1]: Finished systemd-sysusers.service.
Jul 12 00:35:55.451006 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 12 00:35:55.453136 systemd[1]: Starting systemd-udev-settle.service...
Jul 12 00:35:55.461630 udevadm[1088]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 12 00:35:55.470822 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 12 00:35:55.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:55.822533 systemd[1]: Finished systemd-hwdb-update.service.
Jul 12 00:35:55.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:55.824737 systemd[1]: Starting systemd-udevd.service...
Jul 12 00:35:55.846848 systemd-udevd[1091]: Using default interface naming scheme 'v252'.
Jul 12 00:35:55.860611 systemd[1]: Started systemd-udevd.service.
Jul 12 00:35:55.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:55.863002 systemd[1]: Starting systemd-networkd.service...
Jul 12 00:35:55.874207 systemd[1]: Starting systemd-userdbd.service...
Jul 12 00:35:55.878519 systemd[1]: Found device dev-ttyAMA0.device.
Jul 12 00:35:55.921296 systemd[1]: Started systemd-userdbd.service.
Jul 12 00:35:55.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:55.935979 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Jul 12 00:35:55.965794 systemd[1]: Finished systemd-udev-settle.service.
Jul 12 00:35:55.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:55.968002 systemd[1]: Starting lvm2-activation-early.service...
Jul 12 00:35:55.988236 lvm[1125]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 12 00:35:55.993692 systemd-networkd[1098]: lo: Link UP
Jul 12 00:35:55.993955 systemd-networkd[1098]: lo: Gained carrier
Jul 12 00:35:55.994434 systemd-networkd[1098]: Enumeration completed
Jul 12 00:35:55.994650 systemd-networkd[1098]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 12 00:35:55.994653 systemd[1]: Started systemd-networkd.service.
Jul 12 00:35:55.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:55.996109 systemd-networkd[1098]: eth0: Link UP
Jul 12 00:35:55.996190 systemd-networkd[1098]: eth0: Gained carrier
Jul 12 00:35:56.014534 systemd-networkd[1098]: eth0: DHCPv4 address 10.0.0.111/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 12 00:35:56.021197 systemd[1]: Finished lvm2-activation-early.service.
Jul 12 00:35:56.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:56.022277 systemd[1]: Reached target cryptsetup.target.
Jul 12 00:35:56.024342 systemd[1]: Starting lvm2-activation.service...
Jul 12 00:35:56.027957 lvm[1127]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 12 00:35:56.054299 systemd[1]: Finished lvm2-activation.service.
Jul 12 00:35:56.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:56.055284 systemd[1]: Reached target local-fs-pre.target.
Jul 12 00:35:56.056173 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 12 00:35:56.056207 systemd[1]: Reached target local-fs.target.
Jul 12 00:35:56.057027 systemd[1]: Reached target machines.target.
Jul 12 00:35:56.059183 systemd[1]: Starting ldconfig.service...
Jul 12 00:35:56.060241 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 12 00:35:56.060293 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:35:56.061378 systemd[1]: Starting systemd-boot-update.service...
Jul 12 00:35:56.063178 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Jul 12 00:35:56.065341 systemd[1]: Starting systemd-machine-id-commit.service...
Jul 12 00:35:56.067372 systemd[1]: Starting systemd-sysext.service...
Jul 12 00:35:56.068601 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1130 (bootctl)
Jul 12 00:35:56.069630 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Jul 12 00:35:56.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:56.075189 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Jul 12 00:35:56.080112 systemd[1]: Unmounting usr-share-oem.mount...
Jul 12 00:35:56.083864 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Jul 12 00:35:56.084105 systemd[1]: Unmounted usr-share-oem.mount.
Jul 12 00:35:56.095422 kernel: loop0: detected capacity change from 0 to 203944
Jul 12 00:35:56.138245 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 12 00:35:56.139469 systemd[1]: Finished systemd-machine-id-commit.service.
Jul 12 00:35:56.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:56.143691 systemd-fsck[1140]: fsck.fat 4.2 (2021-01-31)
Jul 12 00:35:56.143691 systemd-fsck[1140]: /dev/vda1: 236 files, 117310/258078 clusters
Jul 12 00:35:56.145504 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Jul 12 00:35:56.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:56.148376 systemd[1]: Mounting boot.mount...
Jul 12 00:35:56.153416 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 12 00:35:56.158093 systemd[1]: Mounted boot.mount.
Jul 12 00:35:56.167827 systemd[1]: Finished systemd-boot-update.service.
Jul 12 00:35:56.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:56.174449 kernel: loop1: detected capacity change from 0 to 203944
Jul 12 00:35:56.179760 (sd-sysext)[1153]: Using extensions 'kubernetes'.
Jul 12 00:35:56.181044 (sd-sysext)[1153]: Merged extensions into '/usr'.
Jul 12 00:35:56.199023 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 12 00:35:56.200333 systemd[1]: Starting modprobe@dm_mod.service...
Jul 12 00:35:56.202235 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 12 00:35:56.204254 systemd[1]: Starting modprobe@loop.service...
Jul 12 00:35:56.205040 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 12 00:35:56.205174 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:35:56.205947 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:35:56.206088 systemd[1]: Finished modprobe@dm_mod.service.
Jul 12 00:35:56.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:56.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:56.207444 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:35:56.207618 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 12 00:35:56.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:56.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:56.209052 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:35:56.209193 systemd[1]: Finished modprobe@loop.service.
Jul 12 00:35:56.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:56.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:56.210458 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 00:35:56.210545 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 12 00:35:56.250775 ldconfig[1129]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 12 00:35:56.254324 systemd[1]: Finished ldconfig.service.
Jul 12 00:35:56.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:56.372532 systemd[1]: Mounting usr-share-oem.mount...
Jul 12 00:35:56.377566 systemd[1]: Mounted usr-share-oem.mount.
Jul 12 00:35:56.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:56.379409 systemd[1]: Finished systemd-sysext.service.
Jul 12 00:35:56.382083 systemd[1]: Starting ensure-sysext.service...
Jul 12 00:35:56.383773 systemd[1]: Starting systemd-tmpfiles-setup.service...
Jul 12 00:35:56.388004 systemd[1]: Reloading.
Jul 12 00:35:56.392559 systemd-tmpfiles[1168]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Jul 12 00:35:56.393236 systemd-tmpfiles[1168]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 12 00:35:56.394510 systemd-tmpfiles[1168]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 12 00:35:56.413553 /usr/lib/systemd/system-generators/torcx-generator[1188]: time="2025-07-12T00:35:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Jul 12 00:35:56.413582 /usr/lib/systemd/system-generators/torcx-generator[1188]: time="2025-07-12T00:35:56Z" level=info msg="torcx already run"
Jul 12 00:35:56.488146 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 12 00:35:56.488169 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 12 00:35:56.503486 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 00:35:56.548174 systemd[1]: Finished systemd-tmpfiles-setup.service.
Jul 12 00:35:56.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:56.552187 systemd[1]: Starting audit-rules.service...
Jul 12 00:35:56.553969 systemd[1]: Starting clean-ca-certificates.service...
Jul 12 00:35:56.556015 systemd[1]: Starting systemd-journal-catalog-update.service...
Jul 12 00:35:56.558648 systemd[1]: Starting systemd-resolved.service...
Jul 12 00:35:56.560831 systemd[1]: Starting systemd-timesyncd.service...
Jul 12 00:35:56.562684 systemd[1]: Starting systemd-update-utmp.service...
Jul 12 00:35:56.564098 systemd[1]: Finished clean-ca-certificates.service.
Jul 12 00:35:56.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:56.567119 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 12 00:35:56.572000 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 12 00:35:56.572000 audit[1243]: SYSTEM_BOOT pid=1243 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:56.573246 systemd[1]: Starting modprobe@dm_mod.service...
Jul 12 00:35:56.575031 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 12 00:35:56.576847 systemd[1]: Starting modprobe@loop.service...
Jul 12 00:35:56.577617 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 12 00:35:56.577744 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:35:56.577829 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 12 00:35:56.578683 systemd[1]: Finished systemd-journal-catalog-update.service.
Jul 12 00:35:56.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:56.580102 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:35:56.580237 systemd[1]: Finished modprobe@dm_mod.service.
Jul 12 00:35:56.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:56.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:56.581848 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:35:56.581980 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 12 00:35:56.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:56.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:56.583573 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:35:56.583725 systemd[1]: Finished modprobe@loop.service.
Jul 12 00:35:56.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:56.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:56.587029 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 00:35:56.587173 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 12 00:35:56.588563 systemd[1]: Starting systemd-update-done.service...
Jul 12 00:35:56.594614 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 12 00:35:56.595778 systemd[1]: Starting modprobe@dm_mod.service...
Jul 12 00:35:56.597592 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 12 00:35:56.599743 systemd[1]: Starting modprobe@loop.service...
Jul 12 00:35:56.601405 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 12 00:35:56.601543 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:35:56.601640 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 12 00:35:56.602545 systemd[1]: Finished systemd-update-utmp.service.
Jul 12 00:35:56.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:56.605024 systemd[1]: Finished systemd-update-done.service.
Jul 12 00:35:56.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:56.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:56.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:56.606227 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:35:56.606357 systemd[1]: Finished modprobe@dm_mod.service.
Jul 12 00:35:56.607533 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:35:56.607710 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 12 00:35:56.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:56.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:56.609071 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:35:56.609214 systemd[1]: Finished modprobe@loop.service.
Jul 12 00:35:56.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:56.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:35:56.611253 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 00:35:56.611367 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 12 00:35:56.614249 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 12 00:35:56.615471 systemd[1]: Starting modprobe@dm_mod.service...
Jul 12 00:35:56.617355 systemd[1]: Starting modprobe@drm.service...
Jul 12 00:35:56.619112 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 12 00:35:56.621093 systemd[1]: Starting modprobe@loop.service...
Jul 12 00:35:56.622059 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 12 00:35:56.622188 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:35:56.623488 systemd[1]: Starting systemd-networkd-wait-online.service...
Jul 12 00:35:56.624544 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 12 00:35:56.625725 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:35:56.625936 systemd[1]: Finished modprobe@dm_mod.service.
Jul 12 00:35:56.625000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jul 12 00:35:56.625000 audit[1280]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd8921190 a2=420 a3=0 items=0 ppid=1234 pid=1280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:35:56.625000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jul 12 00:35:56.626343 augenrules[1280]: No rules
Jul 12 00:35:56.627583 systemd[1]: Finished audit-rules.service.
Jul 12 00:35:56.628800 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 12 00:35:56.628994 systemd[1]: Finished modprobe@drm.service.
Jul 12 00:35:56.630639 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:35:56.630828 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 12 00:35:56.632618 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:35:56.632896 systemd[1]: Finished modprobe@loop.service.
Jul 12 00:35:56.634435 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 00:35:56.634507 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 12 00:35:56.635510 systemd[1]: Finished ensure-sysext.service.
Jul 12 00:35:56.644988 systemd[1]: Started systemd-timesyncd.service.
Jul 12 00:35:56.646111 systemd-timesyncd[1240]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 12 00:35:56.646215 systemd[1]: Reached target time-set.target.
Jul 12 00:35:56.646459 systemd-timesyncd[1240]: Initial clock synchronization to Sat 2025-07-12 00:35:56.877341 UTC.
Jul 12 00:35:56.660466 systemd-resolved[1238]: Positive Trust Anchors:
Jul 12 00:35:56.660479 systemd-resolved[1238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:35:56.660505 systemd-resolved[1238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 12 00:35:56.673965 systemd-resolved[1238]: Defaulting to hostname 'linux'.
Jul 12 00:35:56.675344 systemd[1]: Started systemd-resolved.service.
Jul 12 00:35:56.676208 systemd[1]: Reached target network.target.
Jul 12 00:35:56.676977 systemd[1]: Reached target nss-lookup.target.
Jul 12 00:35:56.677774 systemd[1]: Reached target sysinit.target.
Jul 12 00:35:56.678605 systemd[1]: Started motdgen.path.
Jul 12 00:35:56.679298 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Jul 12 00:35:56.680532 systemd[1]: Started logrotate.timer.
Jul 12 00:35:56.681297 systemd[1]: Started mdadm.timer.
Jul 12 00:35:56.681981 systemd[1]: Started systemd-tmpfiles-clean.timer.
Jul 12 00:35:56.682816 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 12 00:35:56.682851 systemd[1]: Reached target paths.target.
Jul 12 00:35:56.683570 systemd[1]: Reached target timers.target.
Jul 12 00:35:56.684741 systemd[1]: Listening on dbus.socket.
Jul 12 00:35:56.686443 systemd[1]: Starting docker.socket...
Jul 12 00:35:56.688092 systemd[1]: Listening on sshd.socket.
Jul 12 00:35:56.688930 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:35:56.689214 systemd[1]: Listening on docker.socket.
Jul 12 00:35:56.689991 systemd[1]: Reached target sockets.target.
Jul 12 00:35:56.690769 systemd[1]: Reached target basic.target.
Jul 12 00:35:56.691627 systemd[1]: System is tainted: cgroupsv1
Jul 12 00:35:56.691676 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 12 00:35:56.691695 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 12 00:35:56.692674 systemd[1]: Starting containerd.service...
Jul 12 00:35:56.694338 systemd[1]: Starting dbus.service...
Jul 12 00:35:56.696124 systemd[1]: Starting enable-oem-cloudinit.service...
Jul 12 00:35:56.698188 systemd[1]: Starting extend-filesystems.service...
Jul 12 00:35:56.699087 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Jul 12 00:35:56.700371 systemd[1]: Starting motdgen.service...
Jul 12 00:35:56.701157 jq[1296]: false
Jul 12 00:35:56.703039 systemd[1]: Starting prepare-helm.service...
Jul 12 00:35:56.705078 systemd[1]: Starting ssh-key-proc-cmdline.service...
Jul 12 00:35:56.707053 systemd[1]: Starting sshd-keygen.service...
Jul 12 00:35:56.709558 systemd[1]: Starting systemd-logind.service...
Jul 12 00:35:56.710272 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:35:56.710334 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 12 00:35:56.711596 systemd[1]: Starting update-engine.service...
Jul 12 00:35:56.714373 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Jul 12 00:35:56.716936 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 12 00:35:56.720463 jq[1313]: true
Jul 12 00:35:56.721662 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Jul 12 00:35:56.722798 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 12 00:35:56.723096 systemd[1]: Finished ssh-key-proc-cmdline.service.
Jul 12 00:35:56.732450 jq[1322]: true
Jul 12 00:35:56.743815 extend-filesystems[1297]: Found loop1
Jul 12 00:35:56.743815 extend-filesystems[1297]: Found vda
Jul 12 00:35:56.743815 extend-filesystems[1297]: Found vda1
Jul 12 00:35:56.743815 extend-filesystems[1297]: Found vda2
Jul 12 00:35:56.743815 extend-filesystems[1297]: Found vda3
Jul 12 00:35:56.743815 extend-filesystems[1297]: Found usr
Jul 12 00:35:56.743815 extend-filesystems[1297]: Found vda4
Jul 12 00:35:56.743815 extend-filesystems[1297]: Found vda6
Jul 12 00:35:56.743815 extend-filesystems[1297]: Found vda7
Jul 12 00:35:56.743815 extend-filesystems[1297]: Found vda9
Jul 12 00:35:56.743815 extend-filesystems[1297]: Checking size of /dev/vda9
Jul 12 00:35:56.773764 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 12 00:35:56.773827 tar[1320]: linux-arm64/helm
Jul 12 00:35:56.743880 systemd[1]: motdgen.service: Deactivated successfully.
Jul 12 00:35:56.758924 dbus-daemon[1295]: [system] SELinux support is enabled
Jul 12 00:35:56.774203 extend-filesystems[1297]: Resized partition /dev/vda9
Jul 12 00:35:56.744237 systemd[1]: Finished motdgen.service.
Jul 12 00:35:56.782787 extend-filesystems[1343]: resize2fs 1.46.5 (30-Dec-2021)
Jul 12 00:35:56.759080 systemd[1]: Started dbus.service.
Jul 12 00:35:56.773848 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 12 00:35:56.773891 systemd[1]: Reached target system-config.target.
Jul 12 00:35:56.775251 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 12 00:35:56.775266 systemd[1]: Reached target user-config.target.
Jul 12 00:35:56.799361 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 12 00:35:56.811390 update_engine[1311]: I0712 00:35:56.810909 1311 main.cc:92] Flatcar Update Engine starting
Jul 12 00:35:56.812950 systemd-logind[1309]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 12 00:35:56.813988 extend-filesystems[1343]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 12 00:35:56.813988 extend-filesystems[1343]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 12 00:35:56.813988 extend-filesystems[1343]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 12 00:35:56.813605 systemd-logind[1309]: New seat seat0.
Jul 12 00:35:56.824846 update_engine[1311]: I0712 00:35:56.824100 1311 update_check_scheduler.cc:74] Next update check in 6m57s
Jul 12 00:35:56.824871 extend-filesystems[1297]: Resized filesystem in /dev/vda9
Jul 12 00:35:56.815188 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 12 00:35:56.816125 systemd[1]: Finished extend-filesystems.service.
Jul 12 00:35:56.821932 systemd[1]: Started systemd-logind.service.
Jul 12 00:35:56.824036 systemd[1]: Started update-engine.service.
Jul 12 00:35:56.827328 systemd[1]: Started locksmithd.service.
Jul 12 00:35:56.831796 bash[1353]: Updated "/home/core/.ssh/authorized_keys"
Jul 12 00:35:56.832706 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Jul 12 00:35:56.841769 env[1324]: time="2025-07-12T00:35:56.841724360Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Jul 12 00:35:56.860881 env[1324]: time="2025-07-12T00:35:56.860832440Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 12 00:35:56.861011 env[1324]: time="2025-07-12T00:35:56.860969200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:35:56.862149 env[1324]: time="2025-07-12T00:35:56.862113240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.186-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:35:56.862149 env[1324]: time="2025-07-12T00:35:56.862146840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:35:56.862393 env[1324]: time="2025-07-12T00:35:56.862354480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:35:56.862494 env[1324]: time="2025-07-12T00:35:56.862378240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 12 00:35:56.862494 env[1324]: time="2025-07-12T00:35:56.862415080Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 12 00:35:56.862494 env[1324]: time="2025-07-12T00:35:56.862424920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 12 00:35:56.862558 env[1324]: time="2025-07-12T00:35:56.862499520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:35:56.862844 env[1324]: time="2025-07-12T00:35:56.862813880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:35:56.863028 env[1324]: time="2025-07-12T00:35:56.862965360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:35:56.863028 env[1324]: time="2025-07-12T00:35:56.862984560Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 12 00:35:56.863091 env[1324]: time="2025-07-12T00:35:56.863034200Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 12 00:35:56.863091 env[1324]: time="2025-07-12T00:35:56.863046800Z" level=info msg="metadata content store policy set" policy=shared
Jul 12 00:35:56.866395 env[1324]: time="2025-07-12T00:35:56.866324000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 12 00:35:56.866395 env[1324]: time="2025-07-12T00:35:56.866368560Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 12 00:35:56.866482 env[1324]: time="2025-07-12T00:35:56.866400760Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 12 00:35:56.866537 env[1324]: time="2025-07-12T00:35:56.866514800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 12 00:35:56.866565 env[1324]: time="2025-07-12T00:35:56.866535360Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 12 00:35:56.866565 env[1324]: time="2025-07-12T00:35:56.866550360Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 12 00:35:56.866565 env[1324]: time="2025-07-12T00:35:56.866563880Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 12 00:35:56.867089 env[1324]: time="2025-07-12T00:35:56.866982240Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 12 00:35:56.867125 env[1324]: time="2025-07-12T00:35:56.867098120Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Jul 12 00:35:56.867125 env[1324]: time="2025-07-12T00:35:56.867113720Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 12 00:35:56.867165 env[1324]: time="2025-07-12T00:35:56.867126360Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 12 00:35:56.867165 env[1324]: time="2025-07-12T00:35:56.867139720Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 12 00:35:56.867279 env[1324]: time="2025-07-12T00:35:56.867240360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 12 00:35:56.867482 env[1324]: time="2025-07-12T00:35:56.867460240Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 12 00:35:56.867917 env[1324]: time="2025-07-12T00:35:56.867893640Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 12 00:35:56.867949 env[1324]: time="2025-07-12T00:35:56.867928320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 12 00:35:56.867949 env[1324]: time="2025-07-12T00:35:56.867942240Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 12 00:35:56.868065 env[1324]: time="2025-07-12T00:35:56.868048840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..."
type=io.containerd.grpc.v1 Jul 12 00:35:56.868096 env[1324]: time="2025-07-12T00:35:56.868066240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 12 00:35:56.868096 env[1324]: time="2025-07-12T00:35:56.868079200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 12 00:35:56.868096 env[1324]: time="2025-07-12T00:35:56.868091680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 12 00:35:56.868156 env[1324]: time="2025-07-12T00:35:56.868103920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 12 00:35:56.868156 env[1324]: time="2025-07-12T00:35:56.868116160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 12 00:35:56.868156 env[1324]: time="2025-07-12T00:35:56.868127720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 12 00:35:56.868218 env[1324]: time="2025-07-12T00:35:56.868162720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 12 00:35:56.868218 env[1324]: time="2025-07-12T00:35:56.868182000Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 12 00:35:56.868320 env[1324]: time="2025-07-12T00:35:56.868298600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 12 00:35:56.868352 env[1324]: time="2025-07-12T00:35:56.868323600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 12 00:35:56.868352 env[1324]: time="2025-07-12T00:35:56.868336320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jul 12 00:35:56.868352 env[1324]: time="2025-07-12T00:35:56.868347600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 12 00:35:56.868442 env[1324]: time="2025-07-12T00:35:56.868362000Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 12 00:35:56.868442 env[1324]: time="2025-07-12T00:35:56.868372560Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 12 00:35:56.868442 env[1324]: time="2025-07-12T00:35:56.868427600Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 12 00:35:56.868502 env[1324]: time="2025-07-12T00:35:56.868461320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 12 00:35:56.868698 env[1324]: time="2025-07-12T00:35:56.868644640Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 12 00:35:56.870669 env[1324]: time="2025-07-12T00:35:56.868704320Z" level=info msg="Connect containerd service" Jul 12 00:35:56.870669 env[1324]: time="2025-07-12T00:35:56.868736560Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 12 00:35:56.870669 env[1324]: time="2025-07-12T00:35:56.869479280Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:35:56.870669 env[1324]: time="2025-07-12T00:35:56.869822680Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 12 00:35:56.870669 env[1324]: time="2025-07-12T00:35:56.869858640Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 12 00:35:56.869985 systemd[1]: Started containerd.service. Jul 12 00:35:56.870826 env[1324]: time="2025-07-12T00:35:56.870767520Z" level=info msg="Start subscribing containerd event" Jul 12 00:35:56.870848 env[1324]: time="2025-07-12T00:35:56.870825800Z" level=info msg="Start recovering state" Jul 12 00:35:56.870916 env[1324]: time="2025-07-12T00:35:56.870888680Z" level=info msg="Start event monitor" Jul 12 00:35:56.870946 env[1324]: time="2025-07-12T00:35:56.870920800Z" level=info msg="Start snapshots syncer" Jul 12 00:35:56.870946 env[1324]: time="2025-07-12T00:35:56.870933080Z" level=info msg="Start cni network conf syncer for default" Jul 12 00:35:56.870946 env[1324]: time="2025-07-12T00:35:56.870942240Z" level=info msg="Start streaming server" Jul 12 00:35:56.871002 env[1324]: time="2025-07-12T00:35:56.870981040Z" level=info msg="containerd successfully booted in 0.029916s" Jul 12 00:35:56.897273 locksmithd[1357]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 12 00:35:57.135369 tar[1320]: linux-arm64/LICENSE Jul 12 00:35:57.135500 tar[1320]: linux-arm64/README.md Jul 12 00:35:57.139888 systemd[1]: Finished prepare-helm.service. Jul 12 00:35:57.678080 sshd_keygen[1318]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 12 00:35:57.695971 systemd[1]: Finished sshd-keygen.service. Jul 12 00:35:57.698321 systemd[1]: Starting issuegen.service... Jul 12 00:35:57.703163 systemd[1]: issuegen.service: Deactivated successfully. Jul 12 00:35:57.703368 systemd[1]: Finished issuegen.service. Jul 12 00:35:57.705552 systemd[1]: Starting systemd-user-sessions.service... Jul 12 00:35:57.711450 systemd[1]: Finished systemd-user-sessions.service. Jul 12 00:35:57.713636 systemd[1]: Started getty@tty1.service. Jul 12 00:35:57.715643 systemd[1]: Started serial-getty@ttyAMA0.service. Jul 12 00:35:57.716736 systemd[1]: Reached target getty.target. 
Jul 12 00:35:58.002641 systemd-networkd[1098]: eth0: Gained IPv6LL Jul 12 00:35:58.004493 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 12 00:35:58.006608 systemd[1]: Reached target network-online.target. Jul 12 00:35:58.009725 systemd[1]: Starting kubelet.service... Jul 12 00:35:58.640349 systemd[1]: Started kubelet.service. Jul 12 00:35:58.641731 systemd[1]: Reached target multi-user.target. Jul 12 00:35:58.643980 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 12 00:35:58.650107 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 12 00:35:58.650304 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 12 00:35:58.651500 systemd[1]: Startup finished in 5.409s (kernel) + 5.254s (userspace) = 10.664s. Jul 12 00:35:59.134774 kubelet[1395]: E0712 00:35:59.134671 1395 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:35:59.136719 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:35:59.136866 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:36:00.903822 systemd[1]: Created slice system-sshd.slice. Jul 12 00:36:00.904975 systemd[1]: Started sshd@0-10.0.0.111:22-10.0.0.1:40380.service. Jul 12 00:36:00.968912 sshd[1405]: Accepted publickey for core from 10.0.0.1 port 40380 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:36:00.972928 sshd[1405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:36:00.983609 systemd-logind[1309]: New session 1 of user core. Jul 12 00:36:00.984384 systemd[1]: Created slice user-500.slice. Jul 12 00:36:00.985294 systemd[1]: Starting user-runtime-dir@500.service... 
Jul 12 00:36:00.993599 systemd[1]: Finished user-runtime-dir@500.service. Jul 12 00:36:00.994741 systemd[1]: Starting user@500.service... Jul 12 00:36:00.997660 (systemd)[1410]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:36:01.056351 systemd[1410]: Queued start job for default target default.target. Jul 12 00:36:01.056581 systemd[1410]: Reached target paths.target. Jul 12 00:36:01.056595 systemd[1410]: Reached target sockets.target. Jul 12 00:36:01.056606 systemd[1410]: Reached target timers.target. Jul 12 00:36:01.056616 systemd[1410]: Reached target basic.target. Jul 12 00:36:01.056656 systemd[1410]: Reached target default.target. Jul 12 00:36:01.056677 systemd[1410]: Startup finished in 53ms. Jul 12 00:36:01.057152 systemd[1]: Started user@500.service. Jul 12 00:36:01.057990 systemd[1]: Started session-1.scope. Jul 12 00:36:01.107842 systemd[1]: Started sshd@1-10.0.0.111:22-10.0.0.1:40388.service. Jul 12 00:36:01.150909 sshd[1419]: Accepted publickey for core from 10.0.0.1 port 40388 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:36:01.152495 sshd[1419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:36:01.156471 systemd-logind[1309]: New session 2 of user core. Jul 12 00:36:01.157194 systemd[1]: Started session-2.scope. Jul 12 00:36:01.210062 sshd[1419]: pam_unix(sshd:session): session closed for user core Jul 12 00:36:01.212257 systemd[1]: Started sshd@2-10.0.0.111:22-10.0.0.1:40402.service. Jul 12 00:36:01.212791 systemd[1]: sshd@1-10.0.0.111:22-10.0.0.1:40388.service: Deactivated successfully. Jul 12 00:36:01.213763 systemd-logind[1309]: Session 2 logged out. Waiting for processes to exit. Jul 12 00:36:01.213766 systemd[1]: session-2.scope: Deactivated successfully. Jul 12 00:36:01.214806 systemd-logind[1309]: Removed session 2. 
Jul 12 00:36:01.255119 sshd[1424]: Accepted publickey for core from 10.0.0.1 port 40402 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:36:01.256504 sshd[1424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:36:01.259999 systemd-logind[1309]: New session 3 of user core. Jul 12 00:36:01.260356 systemd[1]: Started session-3.scope. Jul 12 00:36:01.310352 sshd[1424]: pam_unix(sshd:session): session closed for user core Jul 12 00:36:01.312377 systemd[1]: Started sshd@3-10.0.0.111:22-10.0.0.1:40408.service. Jul 12 00:36:01.312959 systemd[1]: sshd@2-10.0.0.111:22-10.0.0.1:40402.service: Deactivated successfully. Jul 12 00:36:01.313951 systemd-logind[1309]: Session 3 logged out. Waiting for processes to exit. Jul 12 00:36:01.313989 systemd[1]: session-3.scope: Deactivated successfully. Jul 12 00:36:01.314736 systemd-logind[1309]: Removed session 3. Jul 12 00:36:01.356372 sshd[1431]: Accepted publickey for core from 10.0.0.1 port 40408 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:36:01.357425 sshd[1431]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:36:01.360388 systemd-logind[1309]: New session 4 of user core. Jul 12 00:36:01.361136 systemd[1]: Started session-4.scope. Jul 12 00:36:01.413387 sshd[1431]: pam_unix(sshd:session): session closed for user core Jul 12 00:36:01.415605 systemd[1]: Started sshd@4-10.0.0.111:22-10.0.0.1:40422.service. Jul 12 00:36:01.416641 systemd[1]: sshd@3-10.0.0.111:22-10.0.0.1:40408.service: Deactivated successfully. Jul 12 00:36:01.417754 systemd-logind[1309]: Session 4 logged out. Waiting for processes to exit. Jul 12 00:36:01.417985 systemd[1]: session-4.scope: Deactivated successfully. Jul 12 00:36:01.418863 systemd-logind[1309]: Removed session 4. 
Jul 12 00:36:01.460005 sshd[1438]: Accepted publickey for core from 10.0.0.1 port 40422 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:36:01.461019 sshd[1438]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:36:01.463924 systemd-logind[1309]: New session 5 of user core. Jul 12 00:36:01.465910 systemd[1]: Started session-5.scope. Jul 12 00:36:01.533831 sudo[1444]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 12 00:36:01.534045 sudo[1444]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 12 00:36:01.543551 dbus-daemon[1295]: avc: received setenforce notice (enforcing=1) Jul 12 00:36:01.545004 sudo[1444]: pam_unix(sudo:session): session closed for user root Jul 12 00:36:01.546775 sshd[1438]: pam_unix(sshd:session): session closed for user core Jul 12 00:36:01.549096 systemd[1]: Started sshd@5-10.0.0.111:22-10.0.0.1:40424.service. Jul 12 00:36:01.550232 systemd[1]: sshd@4-10.0.0.111:22-10.0.0.1:40422.service: Deactivated successfully. Jul 12 00:36:01.551287 systemd-logind[1309]: Session 5 logged out. Waiting for processes to exit. Jul 12 00:36:01.551851 systemd[1]: session-5.scope: Deactivated successfully. Jul 12 00:36:01.552587 systemd-logind[1309]: Removed session 5. Jul 12 00:36:01.592027 sshd[1446]: Accepted publickey for core from 10.0.0.1 port 40424 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:36:01.593217 sshd[1446]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:36:01.596328 systemd-logind[1309]: New session 6 of user core. Jul 12 00:36:01.597911 systemd[1]: Started session-6.scope. 
Jul 12 00:36:01.651036 sudo[1453]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 12 00:36:01.651250 sudo[1453]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 12 00:36:01.654227 sudo[1453]: pam_unix(sudo:session): session closed for user root Jul 12 00:36:01.658454 sudo[1452]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 12 00:36:01.658668 sudo[1452]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 12 00:36:01.667131 systemd[1]: Stopping audit-rules.service... Jul 12 00:36:01.667000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jul 12 00:36:01.668654 auditctl[1456]: No rules Jul 12 00:36:01.668893 systemd[1]: audit-rules.service: Deactivated successfully. Jul 12 00:36:01.669115 systemd[1]: Stopped audit-rules.service. Jul 12 00:36:01.670638 systemd[1]: Starting audit-rules.service... 
Jul 12 00:36:01.670883 kernel: kauditd_printk_skb: 116 callbacks suppressed Jul 12 00:36:01.670936 kernel: audit: type=1305 audit(1752280561.667:148): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jul 12 00:36:01.670954 kernel: audit: type=1300 audit(1752280561.667:148): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc29302b0 a2=420 a3=0 items=0 ppid=1 pid=1456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:01.667000 audit[1456]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc29302b0 a2=420 a3=0 items=0 ppid=1 pid=1456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:01.674379 kernel: audit: type=1327 audit(1752280561.667:148): proctitle=2F7362696E2F617564697463746C002D44 Jul 12 00:36:01.667000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jul 12 00:36:01.675977 kernel: audit: type=1131 audit(1752280561.667:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:36:01.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:36:01.687217 augenrules[1474]: No rules Jul 12 00:36:01.688169 systemd[1]: Finished audit-rules.service. Jul 12 00:36:01.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:36:01.691170 sudo[1452]: pam_unix(sudo:session): session closed for user root Jul 12 00:36:01.691422 kernel: audit: type=1130 audit(1752280561.686:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:36:01.689000 audit[1452]: USER_END pid=1452 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 12 00:36:01.692576 sshd[1446]: pam_unix(sshd:session): session closed for user core Jul 12 00:36:01.694719 systemd[1]: Started sshd@6-10.0.0.111:22-10.0.0.1:40434.service. Jul 12 00:36:01.689000 audit[1452]: CRED_DISP pid=1452 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 12 00:36:01.695686 systemd[1]: sshd@5-10.0.0.111:22-10.0.0.1:40424.service: Deactivated successfully. Jul 12 00:36:01.696726 systemd[1]: session-6.scope: Deactivated successfully. Jul 12 00:36:01.697025 systemd-logind[1309]: Session 6 logged out. Waiting for processes to exit. Jul 12 00:36:01.697826 kernel: audit: type=1106 audit(1752280561.689:151): pid=1452 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 12 00:36:01.697886 kernel: audit: type=1104 audit(1752280561.689:152): pid=1452 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jul 12 00:36:01.697905 kernel: audit: type=1106 audit(1752280561.692:153): pid=1446 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:36:01.692000 audit[1446]: USER_END pid=1446 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:36:01.697997 systemd-logind[1309]: Removed session 6. Jul 12 00:36:01.694000 audit[1446]: CRED_DISP pid=1446 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:36:01.703996 kernel: audit: type=1104 audit(1752280561.694:154): pid=1446 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:36:01.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.111:22-10.0.0.1:40434 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:36:01.706925 kernel: audit: type=1130 audit(1752280561.694:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.111:22-10.0.0.1:40434 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:36:01.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.111:22-10.0.0.1:40424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:36:01.738000 audit[1480]: USER_ACCT pid=1480 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:36:01.740239 sshd[1480]: Accepted publickey for core from 10.0.0.1 port 40434 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:36:01.739000 audit[1480]: CRED_ACQ pid=1480 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:36:01.739000 audit[1480]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffffe3ab70 a2=3 a3=1 items=0 ppid=1 pid=1480 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:01.739000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:36:01.741430 sshd[1480]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:36:01.745514 systemd[1]: Started session-7.scope. Jul 12 00:36:01.745692 systemd-logind[1309]: New session 7 of user core. 
Jul 12 00:36:01.747000 audit[1480]: USER_START pid=1480 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:36:01.749000 audit[1485]: CRED_ACQ pid=1485 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:36:01.796000 audit[1486]: USER_ACCT pid=1486 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 12 00:36:01.797953 sudo[1486]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 12 00:36:01.796000 audit[1486]: CRED_REFR pid=1486 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 12 00:36:01.798182 sudo[1486]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 12 00:36:01.798000 audit[1486]: USER_START pid=1486 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 12 00:36:01.866366 systemd[1]: Starting docker.service... 
Jul 12 00:36:01.949767 env[1497]: time="2025-07-12T00:36:01.949644386Z" level=info msg="Starting up" Jul 12 00:36:01.951747 env[1497]: time="2025-07-12T00:36:01.951721890Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 12 00:36:01.951747 env[1497]: time="2025-07-12T00:36:01.951742840Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 12 00:36:01.951858 env[1497]: time="2025-07-12T00:36:01.951769768Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 12 00:36:01.951858 env[1497]: time="2025-07-12T00:36:01.951781158Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 12 00:36:01.956946 env[1497]: time="2025-07-12T00:36:01.956909143Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 12 00:36:01.956946 env[1497]: time="2025-07-12T00:36:01.956936194Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 12 00:36:01.957048 env[1497]: time="2025-07-12T00:36:01.956954621Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 12 00:36:01.957048 env[1497]: time="2025-07-12T00:36:01.956964953Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 12 00:36:02.142064 env[1497]: time="2025-07-12T00:36:02.142007637Z" level=warning msg="Your kernel does not support cgroup blkio weight" Jul 12 00:36:02.142064 env[1497]: time="2025-07-12T00:36:02.142040112Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Jul 12 00:36:02.142272 env[1497]: time="2025-07-12T00:36:02.142185436Z" level=info msg="Loading containers: start." 
Jul 12 00:36:02.191000 audit[1531]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1531 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:02.191000 audit[1531]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffde74a7a0 a2=0 a3=1 items=0 ppid=1497 pid=1531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:02.191000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jul 12 00:36:02.192000 audit[1533]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1533 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:02.192000 audit[1533]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe323c3a0 a2=0 a3=1 items=0 ppid=1497 pid=1533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:02.192000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jul 12 00:36:02.194000 audit[1535]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1535 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:02.194000 audit[1535]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd126bf50 a2=0 a3=1 items=0 ppid=1497 pid=1535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:02.194000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 12 00:36:02.196000 
audit[1537]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1537 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:02.196000 audit[1537]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff8bdac70 a2=0 a3=1 items=0 ppid=1497 pid=1537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:02.196000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 12 00:36:02.199000 audit[1539]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1539 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:02.199000 audit[1539]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffdb42da80 a2=0 a3=1 items=0 ppid=1497 pid=1539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:02.199000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jul 12 00:36:02.227000 audit[1544]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1544 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:02.227000 audit[1544]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffff91b640 a2=0 a3=1 items=0 ppid=1497 pid=1544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:02.227000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jul 12 00:36:02.234000 audit[1546]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1546 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:02.234000 audit[1546]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdea51230 a2=0 a3=1 items=0 ppid=1497 pid=1546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:02.234000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jul 12 00:36:02.236000 audit[1548]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1548 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:02.236000 audit[1548]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffe0cf0030 a2=0 a3=1 items=0 ppid=1497 pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:02.236000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jul 12 00:36:02.237000 audit[1550]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1550 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:02.237000 audit[1550]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffd8594070 a2=0 a3=1 items=0 ppid=1497 pid=1550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:02.237000 audit: 
PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 12 00:36:02.244000 audit[1554]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1554 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:02.244000 audit[1554]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=fffff62ac1c0 a2=0 a3=1 items=0 ppid=1497 pid=1554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:02.244000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 12 00:36:02.260000 audit[1555]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1555 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:02.260000 audit[1555]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffeebe5610 a2=0 a3=1 items=0 ppid=1497 pid=1555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:02.260000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 12 00:36:02.271418 kernel: Initializing XFRM netlink socket Jul 12 00:36:02.298385 env[1497]: time="2025-07-12T00:36:02.298352825Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Jul 12 00:36:02.314000 audit[1563]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1563 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:02.314000 audit[1563]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=ffffc1f2ae80 a2=0 a3=1 items=0 ppid=1497 pid=1563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:02.314000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jul 12 00:36:02.332000 audit[1566]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1566 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:02.332000 audit[1566]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffd6cb3b70 a2=0 a3=1 items=0 ppid=1497 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:02.332000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jul 12 00:36:02.336000 audit[1569]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1569 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:02.336000 audit[1569]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffe893c980 a2=0 a3=1 items=0 ppid=1497 pid=1569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 
12 00:36:02.336000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jul 12 00:36:02.338000 audit[1571]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1571 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:02.338000 audit[1571]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffd1335930 a2=0 a3=1 items=0 ppid=1497 pid=1571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:02.338000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jul 12 00:36:02.340000 audit[1573]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1573 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:02.340000 audit[1573]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=ffffea7b6520 a2=0 a3=1 items=0 ppid=1497 pid=1573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:02.340000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jul 12 00:36:02.341000 audit[1575]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1575 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:02.341000 audit[1575]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=ffffc91ad420 a2=0 a3=1 items=0 ppid=1497 pid=1575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:02.341000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jul 12 00:36:02.343000 audit[1577]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1577 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:02.343000 audit[1577]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=ffffe6f79c30 a2=0 a3=1 items=0 ppid=1497 pid=1577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:02.343000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jul 12 00:36:02.350000 audit[1580]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:02.350000 audit[1580]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=ffffdcf35a90 a2=0 a3=1 items=0 ppid=1497 pid=1580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:02.350000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jul 12 00:36:02.352000 audit[1582]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1582 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:02.352000 
audit[1582]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffc1c574f0 a2=0 a3=1 items=0 ppid=1497 pid=1582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:02.352000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 12 00:36:02.353000 audit[1584]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1584 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:02.353000 audit[1584]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffe68bbf70 a2=0 a3=1 items=0 ppid=1497 pid=1584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:02.353000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 12 00:36:02.355000 audit[1586]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1586 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:02.355000 audit[1586]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffcee46120 a2=0 a3=1 items=0 ppid=1497 pid=1586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:02.355000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jul 12 00:36:02.356303 systemd-networkd[1098]: docker0: Link UP Jul 12 00:36:02.362000 audit[1590]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1590 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:02.362000 audit[1590]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc2dc5b90 a2=0 a3=1 items=0 ppid=1497 pid=1590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:02.362000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 12 00:36:02.376000 audit[1591]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1591 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:02.376000 audit[1591]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffc8c49210 a2=0 a3=1 items=0 ppid=1497 pid=1591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:02.376000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 12 00:36:02.378658 env[1497]: time="2025-07-12T00:36:02.378618060Z" level=info msg="Loading containers: done." Jul 12 00:36:02.401429 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1961595444-merged.mount: Deactivated successfully. 
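The `proctitle=` fields in the NETFILTER_CFG/PROCTITLE audit records above are the invoked command lines, hex-encoded with NUL bytes separating argv entries. A minimal decoder (an illustrative helper, not part of the log) recovers them; the first record in this section decodes to the `iptables` invocation that created the DOCKER chain:

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE value: hex-encoded bytes, argv entries separated by NUL."""
    return bytes.fromhex(hex_str).replace(b"\x00", b" ").decode("utf-8").strip()

# First PROCTITLE record above:
print(decode_proctitle(
    "2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"
))
# /usr/sbin/iptables --wait -t nat -N DOCKER
```

Applied to the remaining records, this shows the Docker daemon building its standard chains (DOCKER, DOCKER-ISOLATION-STAGE-1/2, DOCKER-USER) and MASQUERADE/FORWARD rules via `xtables-nft-multi`.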
Jul 12 00:36:02.408313 env[1497]: time="2025-07-12T00:36:02.407943487Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 12 00:36:02.408494 env[1497]: time="2025-07-12T00:36:02.408472903Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 12 00:36:02.408590 env[1497]: time="2025-07-12T00:36:02.408573777Z" level=info msg="Daemon has completed initialization" Jul 12 00:36:02.427530 systemd[1]: Started docker.service. Jul 12 00:36:02.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:36:02.429546 env[1497]: time="2025-07-12T00:36:02.429503920Z" level=info msg="API listen on /run/docker.sock" Jul 12 00:36:03.047425 env[1324]: time="2025-07-12T00:36:03.047363827Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 12 00:36:03.574962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3885375092.mount: Deactivated successfully. 
Jul 12 00:36:04.825613 env[1324]: time="2025-07-12T00:36:04.825552022Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:04.829155 env[1324]: time="2025-07-12T00:36:04.829113769Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:04.830814 env[1324]: time="2025-07-12T00:36:04.830784769Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:04.833483 env[1324]: time="2025-07-12T00:36:04.833451873Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:04.833975 env[1324]: time="2025-07-12T00:36:04.833938739Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 12 00:36:04.838364 env[1324]: time="2025-07-12T00:36:04.838306414Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 12 00:36:06.175275 env[1324]: time="2025-07-12T00:36:06.175225607Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:06.176842 env[1324]: time="2025-07-12T00:36:06.176806996Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Jul 12 00:36:06.178568 env[1324]: time="2025-07-12T00:36:06.178537550Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:06.181026 env[1324]: time="2025-07-12T00:36:06.180986536Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:06.181837 env[1324]: time="2025-07-12T00:36:06.181804223Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 12 00:36:06.182399 env[1324]: time="2025-07-12T00:36:06.182365581Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 12 00:36:07.383058 env[1324]: time="2025-07-12T00:36:07.383009946Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:07.384976 env[1324]: time="2025-07-12T00:36:07.384945477Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:07.387195 env[1324]: time="2025-07-12T00:36:07.387163542Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:07.389539 env[1324]: time="2025-07-12T00:36:07.389502519Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:07.390353 env[1324]: time="2025-07-12T00:36:07.390312151Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 12 00:36:07.391475 env[1324]: time="2025-07-12T00:36:07.391445265Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 12 00:36:08.399283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3626844487.mount: Deactivated successfully. Jul 12 00:36:08.967822 env[1324]: time="2025-07-12T00:36:08.967768159Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:08.969243 env[1324]: time="2025-07-12T00:36:08.969207797Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:08.970687 env[1324]: time="2025-07-12T00:36:08.970648201Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:08.972133 env[1324]: time="2025-07-12T00:36:08.972102537Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:08.972487 env[1324]: time="2025-07-12T00:36:08.972464691Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference 
\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 12 00:36:08.972929 env[1324]: time="2025-07-12T00:36:08.972904720Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 12 00:36:09.387746 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 12 00:36:09.387946 systemd[1]: Stopped kubelet.service. Jul 12 00:36:09.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:36:09.388952 kernel: kauditd_printk_skb: 84 callbacks suppressed Jul 12 00:36:09.389030 kernel: audit: type=1130 audit(1752280569.386:190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:36:09.389759 systemd[1]: Starting kubelet.service... Jul 12 00:36:09.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:36:09.394220 kernel: audit: type=1131 audit(1752280569.386:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:36:09.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:36:09.488779 systemd[1]: Started kubelet.service. 
Jul 12 00:36:09.492426 kernel: audit: type=1130 audit(1752280569.488:192): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:36:09.539356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3062017778.mount: Deactivated successfully. Jul 12 00:36:09.553325 kubelet[1638]: E0712 00:36:09.553275 1638 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:36:09.555782 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:36:09.555938 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:36:09.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 12 00:36:09.559420 kernel: audit: type=1131 audit(1752280569.554:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jul 12 00:36:10.411897 env[1324]: time="2025-07-12T00:36:10.411846320Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:10.413826 env[1324]: time="2025-07-12T00:36:10.413797572Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:10.416660 env[1324]: time="2025-07-12T00:36:10.416616625Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:10.419620 env[1324]: time="2025-07-12T00:36:10.419593759Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:10.420448 env[1324]: time="2025-07-12T00:36:10.420417455Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 12 00:36:10.420908 env[1324]: time="2025-07-12T00:36:10.420883538Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 12 00:36:10.847919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1651648826.mount: Deactivated successfully. 
Jul 12 00:36:10.850798 env[1324]: time="2025-07-12T00:36:10.850753962Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:10.852161 env[1324]: time="2025-07-12T00:36:10.852121576Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:10.853616 env[1324]: time="2025-07-12T00:36:10.853584070Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:10.854903 env[1324]: time="2025-07-12T00:36:10.854875337Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:10.855503 env[1324]: time="2025-07-12T00:36:10.855477269Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 12 00:36:10.855929 env[1324]: time="2025-07-12T00:36:10.855899690Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 12 00:36:11.335098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1521691001.mount: Deactivated successfully. 
Jul 12 00:36:13.103623 env[1324]: time="2025-07-12T00:36:13.103554996Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:13.105140 env[1324]: time="2025-07-12T00:36:13.105111732Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:13.106992 env[1324]: time="2025-07-12T00:36:13.106966884Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:13.108987 env[1324]: time="2025-07-12T00:36:13.108959985Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:13.110751 env[1324]: time="2025-07-12T00:36:13.110721177Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 12 00:36:19.185639 systemd[1]: Stopped kubelet.service. Jul 12 00:36:19.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:36:19.187786 systemd[1]: Starting kubelet.service... Jul 12 00:36:19.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:36:19.191101 kernel: audit: type=1130 audit(1752280579.184:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:36:19.191195 kernel: audit: type=1131 audit(1752280579.184:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:36:19.209115 systemd[1]: Reloading. Jul 12 00:36:19.255093 /usr/lib/systemd/system-generators/torcx-generator[1696]: time="2025-07-12T00:36:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 12 00:36:19.255128 /usr/lib/systemd/system-generators/torcx-generator[1696]: time="2025-07-12T00:36:19Z" level=info msg="torcx already run" Jul 12 00:36:19.354678 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 12 00:36:19.354696 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 12 00:36:19.370062 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:36:19.430848 systemd[1]: Started kubelet.service. Jul 12 00:36:19.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:36:19.434438 kernel: audit: type=1130 audit(1752280579.430:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:36:19.434819 systemd[1]: Stopping kubelet.service... Jul 12 00:36:19.435280 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:36:19.435547 systemd[1]: Stopped kubelet.service. Jul 12 00:36:19.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:36:19.437546 systemd[1]: Starting kubelet.service... Jul 12 00:36:19.438405 kernel: audit: type=1131 audit(1752280579.435:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:36:19.528788 systemd[1]: Started kubelet.service. Jul 12 00:36:19.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:36:19.532401 kernel: audit: type=1130 audit(1752280579.528:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:36:19.571286 kubelet[1755]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:36:19.571286 kubelet[1755]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jul 12 00:36:19.571286 kubelet[1755]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:36:19.571684 kubelet[1755]: I0712 00:36:19.571327 1755 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:36:20.542763 kubelet[1755]: I0712 00:36:20.542721 1755 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 12 00:36:20.542763 kubelet[1755]: I0712 00:36:20.542752 1755 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:36:20.543005 kubelet[1755]: I0712 00:36:20.542978 1755 server.go:934] "Client rotation is on, will bootstrap in background" Jul 12 00:36:20.583496 kubelet[1755]: E0712 00:36:20.583460 1755 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.111:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:36:20.584501 kubelet[1755]: I0712 00:36:20.584481 1755 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:36:20.594131 kubelet[1755]: E0712 00:36:20.594095 1755 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:36:20.594131 kubelet[1755]: I0712 00:36:20.594133 1755 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been 
enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:36:20.597622 kubelet[1755]: I0712 00:36:20.597602 1755 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 12 00:36:20.598874 kubelet[1755]: I0712 00:36:20.598842 1755 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 12 00:36:20.599034 kubelet[1755]: I0712 00:36:20.599006 1755 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:36:20.599189 kubelet[1755]: I0712 00:36:20.599036 1755 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"E
xperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 12 00:36:20.599280 kubelet[1755]: I0712 00:36:20.599267 1755 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:36:20.599280 kubelet[1755]: I0712 00:36:20.599277 1755 container_manager_linux.go:300] "Creating device plugin manager" Jul 12 00:36:20.599538 kubelet[1755]: I0712 00:36:20.599524 1755 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:36:20.607677 kubelet[1755]: I0712 00:36:20.607650 1755 kubelet.go:408] "Attempting to sync node with API server" Jul 12 00:36:20.607677 kubelet[1755]: I0712 00:36:20.607682 1755 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:36:20.607783 kubelet[1755]: I0712 00:36:20.607708 1755 kubelet.go:314] "Adding apiserver pod source" Jul 12 00:36:20.607783 kubelet[1755]: I0712 00:36:20.607720 1755 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:36:20.612021 kubelet[1755]: W0712 00:36:20.611965 1755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused Jul 12 00:36:20.612098 kubelet[1755]: E0712 00:36:20.612031 1755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:36:20.623984 kubelet[1755]: W0712 00:36:20.623930 1755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.Service: Get "https://10.0.0.111:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused Jul 12 00:36:20.624062 kubelet[1755]: E0712 00:36:20.623988 1755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.111:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:36:20.625653 kubelet[1755]: I0712 00:36:20.625625 1755 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 12 00:36:20.626357 kubelet[1755]: I0712 00:36:20.626324 1755 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:36:20.626594 kubelet[1755]: W0712 00:36:20.626520 1755 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 12 00:36:20.627461 kubelet[1755]: I0712 00:36:20.627438 1755 server.go:1274] "Started kubelet" Jul 12 00:36:20.628259 kubelet[1755]: I0712 00:36:20.627686 1755 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:36:20.627000 audit[1755]: AVC avc: denied { mac_admin } for pid=1755 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:36:20.629236 kubelet[1755]: I0712 00:36:20.629046 1755 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Jul 12 00:36:20.629236 kubelet[1755]: I0712 00:36:20.629083 1755 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Jul 12 00:36:20.629236 kubelet[1755]: I0712 00:36:20.629139 1755 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:36:20.630311 kubelet[1755]: I0712 00:36:20.630287 1755 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:36:20.627000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 12 00:36:20.632047 kubelet[1755]: I0712 00:36:20.632027 1755 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 12 00:36:20.632234 kubelet[1755]: I0712 00:36:20.632217 1755 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 12 00:36:20.632361 kubelet[1755]: I0712 00:36:20.632350 1755 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:36:20.632666 kubelet[1755]: I0712 00:36:20.632630 1755 server.go:449] "Adding debug handlers to kubelet server" Jul 12 
00:36:20.632870 kernel: audit: type=1400 audit(1752280580.627:199): avc: denied { mac_admin } for pid=1755 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:36:20.632925 kernel: audit: type=1401 audit(1752280580.627:199): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 12 00:36:20.632941 kernel: audit: type=1300 audit(1752280580.627:199): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000894e40 a1=4000bb8510 a2=4000894e10 a3=25 items=0 ppid=1 pid=1755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:20.627000 audit[1755]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000894e40 a1=4000bb8510 a2=4000894e10 a3=25 items=0 ppid=1 pid=1755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:20.633107 kubelet[1755]: W0712 00:36:20.633071 1755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused Jul 12 00:36:20.633206 kubelet[1755]: E0712 00:36:20.633180 1755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:36:20.633437 kubelet[1755]: I0712 00:36:20.627855 1755 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:36:20.633578 kubelet[1755]: I0712 
00:36:20.633549 1755 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:36:20.634255 kubelet[1755]: E0712 00:36:20.634171 1755 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:36:20.634487 kubelet[1755]: E0712 00:36:20.634451 1755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="200ms" Jul 12 00:36:20.634761 kubelet[1755]: I0712 00:36:20.634642 1755 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:36:20.634814 kubelet[1755]: I0712 00:36:20.634773 1755 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:36:20.635744 kubelet[1755]: E0712 00:36:20.635725 1755 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:36:20.636270 kubelet[1755]: E0712 00:36:20.634803 1755 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.111:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.111:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185159e7731ccda3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-12 00:36:20.627418531 +0000 UTC m=+1.095155330,LastTimestamp:2025-07-12 00:36:20.627418531 +0000 UTC m=+1.095155330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 12 00:36:20.627000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 12 00:36:20.639630 kernel: audit: type=1327 audit(1752280580.627:199): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 12 00:36:20.639696 kernel: audit: type=1400 audit(1752280580.627:200): avc: denied { mac_admin } for pid=1755 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:36:20.627000 audit[1755]: AVC avc: denied { mac_admin } for pid=1755 comm="kubelet" capability=33 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:36:20.641534 kubelet[1755]: I0712 00:36:20.641508 1755 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:36:20.627000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 12 00:36:20.627000 audit[1755]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40009cc760 a1=4000bb8528 a2=4000894ed0 a3=25 items=0 ppid=1 pid=1755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:20.627000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 12 00:36:20.637000 audit[1769]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1769 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:20.637000 audit[1769]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd3f4a7f0 a2=0 a3=1 items=0 ppid=1755 pid=1769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:20.637000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jul 12 00:36:20.645000 audit[1770]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1770 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:20.645000 audit[1770]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd7f171d0 a2=0 a3=1 items=0 ppid=1755 pid=1770 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:20.645000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 12 00:36:20.647000 audit[1774]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1774 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:20.647000 audit[1774]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe465ee10 a2=0 a3=1 items=0 ppid=1755 pid=1774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:20.647000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 12 00:36:20.649000 audit[1776]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1776 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:20.649000 audit[1776]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffdc8ea890 a2=0 a3=1 items=0 ppid=1755 pid=1776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:20.649000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 12 00:36:20.657000 audit[1781]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1781 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:20.657000 audit[1781]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffff9ab17f0 a2=0 a3=1 
items=0 ppid=1755 pid=1781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:20.657000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jul 12 00:36:20.658675 kubelet[1755]: I0712 00:36:20.658640 1755 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:36:20.659000 audit[1783]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1783 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:36:20.659000 audit[1783]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffda63ef50 a2=0 a3=1 items=0 ppid=1755 pid=1783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:20.659000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jul 12 00:36:20.659829 kubelet[1755]: I0712 00:36:20.659770 1755 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 12 00:36:20.659829 kubelet[1755]: I0712 00:36:20.659788 1755 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 12 00:36:20.659890 kubelet[1755]: I0712 00:36:20.659828 1755 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 12 00:36:20.659890 kubelet[1755]: I0712 00:36:20.659847 1755 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 12 00:36:20.659890 kubelet[1755]: I0712 00:36:20.659850 1755 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:36:20.659890 kubelet[1755]: I0712 00:36:20.659861 1755 kubelet.go:2321] "Starting kubelet main sync loop" Jul 12 00:36:20.659968 kubelet[1755]: E0712 00:36:20.659901 1755 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:36:20.660676 kubelet[1755]: W0712 00:36:20.660630 1755 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.111:6443: connect: connection refused Jul 12 00:36:20.660000 audit[1784]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1784 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:20.660000 audit[1784]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd0598f80 a2=0 a3=1 items=0 ppid=1755 pid=1784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:20.660000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jul 12 00:36:20.660000 audit[1785]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=1785 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:36:20.660000 audit[1785]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcfaa4460 a2=0 a3=1 items=0 ppid=1755 pid=1785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:20.660000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jul 12 00:36:20.660996 kubelet[1755]: E0712 00:36:20.660972 1755 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:36:20.661000 audit[1787]: NETFILTER_CFG table=nat:34 family=10 entries=2 op=nft_register_chain pid=1787 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:36:20.661000 audit[1787]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffd4f21d10 a2=0 a3=1 items=0 ppid=1755 pid=1787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:20.661000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jul 12 00:36:20.661000 audit[1786]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_chain pid=1786 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:20.661000 audit[1786]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdda9ad40 a2=0 a3=1 items=0 ppid=1755 pid=1786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:20.661000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jul 12 00:36:20.662000 audit[1788]: NETFILTER_CFG table=filter:36 family=10 entries=2 op=nft_register_chain pid=1788 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:36:20.662000 audit[1788]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd0d9dcd0 a2=0 a3=1 items=0 ppid=1755 pid=1788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:20.662000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jul 12 00:36:20.662000 audit[1789]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_chain pid=1789 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:20.662000 audit[1789]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcad12970 a2=0 a3=1 items=0 ppid=1755 pid=1789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:20.662000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jul 12 00:36:20.731072 kubelet[1755]: I0712 00:36:20.731024 1755 policy_none.go:49] "None policy: Start" Jul 12 00:36:20.731875 kubelet[1755]: I0712 00:36:20.731857 1755 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 12 00:36:20.731918 kubelet[1755]: I0712 00:36:20.731889 1755 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:36:20.734361 kubelet[1755]: E0712 00:36:20.734336 1755 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 
12 00:36:20.737133 kubelet[1755]: I0712 00:36:20.737107 1755 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:36:20.736000 audit[1755]: AVC avc: denied { mac_admin } for pid=1755 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:36:20.736000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 12 00:36:20.736000 audit[1755]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000edede0 a1=4000e91998 a2=4000ededb0 a3=25 items=0 ppid=1 pid=1755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:20.736000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 12 00:36:20.737314 kubelet[1755]: I0712 00:36:20.737169 1755 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Jul 12 00:36:20.737314 kubelet[1755]: I0712 00:36:20.737272 1755 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:36:20.737314 kubelet[1755]: I0712 00:36:20.737283 1755 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:36:20.737711 kubelet[1755]: I0712 00:36:20.737673 1755 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:36:20.738950 kubelet[1755]: E0712 00:36:20.738928 1755 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 12 00:36:20.835670 kubelet[1755]: E0712 00:36:20.835556 1755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="400ms" Jul 12 00:36:20.839364 kubelet[1755]: I0712 00:36:20.839257 1755 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:36:20.839799 kubelet[1755]: E0712 00:36:20.839757 1755 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" Jul 12 00:36:20.934160 kubelet[1755]: I0712 00:36:20.934113 1755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:36:20.934160 kubelet[1755]: I0712 00:36:20.934160 1755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:36:20.934282 kubelet[1755]: I0712 00:36:20.934181 1755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 12 00:36:20.934282 kubelet[1755]: I0712 00:36:20.934196 1755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1dc4853c1487ffdb4db23ce77eca59b6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1dc4853c1487ffdb4db23ce77eca59b6\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:36:20.934282 kubelet[1755]: I0712 00:36:20.934214 1755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1dc4853c1487ffdb4db23ce77eca59b6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1dc4853c1487ffdb4db23ce77eca59b6\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:36:20.934282 kubelet[1755]: I0712 00:36:20.934231 1755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:36:20.934282 kubelet[1755]: I0712 00:36:20.934246 1755 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1dc4853c1487ffdb4db23ce77eca59b6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1dc4853c1487ffdb4db23ce77eca59b6\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:36:20.934438 kubelet[1755]: I0712 00:36:20.934261 1755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:36:20.934438 kubelet[1755]: I0712 00:36:20.934279 1755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:36:21.041175 kubelet[1755]: I0712 00:36:21.041148 1755 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:36:21.041523 kubelet[1755]: E0712 00:36:21.041498 1755 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" Jul 12 00:36:21.067860 kubelet[1755]: E0712 00:36:21.067828 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:36:21.068037 kubelet[1755]: E0712 00:36:21.068017 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:36:21.068703 env[1324]: time="2025-07-12T00:36:21.068418597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 12 00:36:21.069440 env[1324]: time="2025-07-12T00:36:21.069404793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 12 00:36:21.069512 kubelet[1755]: E0712 00:36:21.069444 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:36:21.069808 env[1324]: time="2025-07-12T00:36:21.069758927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1dc4853c1487ffdb4db23ce77eca59b6,Namespace:kube-system,Attempt:0,}" Jul 12 00:36:21.236473 kubelet[1755]: E0712 00:36:21.236348 1755 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="800ms" Jul 12 00:36:21.442688 kubelet[1755]: I0712 00:36:21.442660 1755 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:36:21.443032 kubelet[1755]: E0712 00:36:21.443005 1755 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" Jul 12 00:36:21.515879 kubelet[1755]: E0712 00:36:21.515705 1755 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.111:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.111:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185159e7731ccda3 default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-12 00:36:20.627418531 +0000 UTC m=+1.095155330,LastTimestamp:2025-07-12 00:36:20.627418531 +0000 UTC m=+1.095155330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 12 00:36:21.579130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3993676987.mount: Deactivated successfully. Jul 12 00:36:21.583472 env[1324]: time="2025-07-12T00:36:21.583426797Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:21.585253 env[1324]: time="2025-07-12T00:36:21.585214572Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:21.586191 env[1324]: time="2025-07-12T00:36:21.586161441Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:21.587307 env[1324]: time="2025-07-12T00:36:21.587272262Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:21.588422 env[1324]: time="2025-07-12T00:36:21.588379279Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:21.589686 env[1324]: 
time="2025-07-12T00:36:21.589650329Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:21.590278 env[1324]: time="2025-07-12T00:36:21.590252995Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:21.591026 env[1324]: time="2025-07-12T00:36:21.590999429Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:21.594138 env[1324]: time="2025-07-12T00:36:21.594111515Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:21.594923 env[1324]: time="2025-07-12T00:36:21.594889827Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:21.597585 env[1324]: time="2025-07-12T00:36:21.597558915Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:21.599065 env[1324]: time="2025-07-12T00:36:21.599036005Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:21.629492 env[1324]: time="2025-07-12T00:36:21.629431137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:36:21.629634 env[1324]: time="2025-07-12T00:36:21.629462934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:36:21.629634 env[1324]: time="2025-07-12T00:36:21.629480715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:36:21.629826 env[1324]: time="2025-07-12T00:36:21.629778704Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/851102fd574109159ffe489be558fdc07cff12e67b6444246bb50cda7f6c85b5 pid=1812 runtime=io.containerd.runc.v2 Jul 12 00:36:21.629937 env[1324]: time="2025-07-12T00:36:21.629422687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:36:21.629937 env[1324]: time="2025-07-12T00:36:21.629463014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:36:21.629937 env[1324]: time="2025-07-12T00:36:21.629472866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:36:21.630061 env[1324]: time="2025-07-12T00:36:21.629737136Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/791aa5007a72e6215605e814f338f2ccdbf0832748652df0781e7213981a63ab pid=1813 runtime=io.containerd.runc.v2 Jul 12 00:36:21.630437 env[1324]: time="2025-07-12T00:36:21.629737496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:36:21.630437 env[1324]: time="2025-07-12T00:36:21.629780867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:36:21.630437 env[1324]: time="2025-07-12T00:36:21.629793802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:36:21.630437 env[1324]: time="2025-07-12T00:36:21.630089548Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e1f22107f81b6c5d157846dc02d60aac5caf89d5dd4bf0651f37608ebd9969f2 pid=1811 runtime=io.containerd.runc.v2 Jul 12 00:36:21.702744 env[1324]: time="2025-07-12T00:36:21.702706269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1dc4853c1487ffdb4db23ce77eca59b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1f22107f81b6c5d157846dc02d60aac5caf89d5dd4bf0651f37608ebd9969f2\"" Jul 12 00:36:21.704591 kubelet[1755]: E0712 00:36:21.704317 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:36:21.706449 env[1324]: time="2025-07-12T00:36:21.706416375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"851102fd574109159ffe489be558fdc07cff12e67b6444246bb50cda7f6c85b5\"" Jul 12 00:36:21.706554 env[1324]: time="2025-07-12T00:36:21.706455742Z" level=info msg="CreateContainer within sandbox \"e1f22107f81b6c5d157846dc02d60aac5caf89d5dd4bf0651f37608ebd9969f2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 12 00:36:21.707224 kubelet[1755]: E0712 00:36:21.707088 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:36:21.710120 env[1324]: time="2025-07-12T00:36:21.710085634Z" level=info 
msg="CreateContainer within sandbox \"851102fd574109159ffe489be558fdc07cff12e67b6444246bb50cda7f6c85b5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 12 00:36:21.719319 env[1324]: time="2025-07-12T00:36:21.719275321Z" level=info msg="CreateContainer within sandbox \"e1f22107f81b6c5d157846dc02d60aac5caf89d5dd4bf0651f37608ebd9969f2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"842e0e055524f3600df6b26ee90a059ca14ac66b94994e22aaf9a442fa4ff972\"" Jul 12 00:36:21.719887 env[1324]: time="2025-07-12T00:36:21.719857964Z" level=info msg="StartContainer for \"842e0e055524f3600df6b26ee90a059ca14ac66b94994e22aaf9a442fa4ff972\"" Jul 12 00:36:21.719956 env[1324]: time="2025-07-12T00:36:21.719897370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"791aa5007a72e6215605e814f338f2ccdbf0832748652df0781e7213981a63ab\"" Jul 12 00:36:21.720364 kubelet[1755]: E0712 00:36:21.720343 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:36:21.721698 env[1324]: time="2025-07-12T00:36:21.721669206Z" level=info msg="CreateContainer within sandbox \"851102fd574109159ffe489be558fdc07cff12e67b6444246bb50cda7f6c85b5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1ea27510510eac9be357b8f076031a7046b8f26aefd84b08708d95ad5926e52c\"" Jul 12 00:36:21.721874 env[1324]: time="2025-07-12T00:36:21.721693635Z" level=info msg="CreateContainer within sandbox \"791aa5007a72e6215605e814f338f2ccdbf0832748652df0781e7213981a63ab\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 12 00:36:21.722152 env[1324]: time="2025-07-12T00:36:21.722116931Z" level=info msg="StartContainer for 
\"1ea27510510eac9be357b8f076031a7046b8f26aefd84b08708d95ad5926e52c\"" Jul 12 00:36:21.731478 env[1324]: time="2025-07-12T00:36:21.731435529Z" level=info msg="CreateContainer within sandbox \"791aa5007a72e6215605e814f338f2ccdbf0832748652df0781e7213981a63ab\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0d767268a875b3761767485d02f07ceaec34c616f1b301781529674ce9dbcd34\"" Jul 12 00:36:21.731952 env[1324]: time="2025-07-12T00:36:21.731924261Z" level=info msg="StartContainer for \"0d767268a875b3761767485d02f07ceaec34c616f1b301781529674ce9dbcd34\"" Jul 12 00:36:21.802848 env[1324]: time="2025-07-12T00:36:21.802741193Z" level=info msg="StartContainer for \"1ea27510510eac9be357b8f076031a7046b8f26aefd84b08708d95ad5926e52c\" returns successfully" Jul 12 00:36:21.824750 env[1324]: time="2025-07-12T00:36:21.824694193Z" level=info msg="StartContainer for \"0d767268a875b3761767485d02f07ceaec34c616f1b301781529674ce9dbcd34\" returns successfully" Jul 12 00:36:21.825406 env[1324]: time="2025-07-12T00:36:21.825360414Z" level=info msg="StartContainer for \"842e0e055524f3600df6b26ee90a059ca14ac66b94994e22aaf9a442fa4ff972\" returns successfully" Jul 12 00:36:22.244947 kubelet[1755]: I0712 00:36:22.244849 1755 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:36:22.668757 kubelet[1755]: E0712 00:36:22.668640 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:36:22.670377 kubelet[1755]: E0712 00:36:22.670282 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:36:22.672003 kubelet[1755]: E0712 00:36:22.671985 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:36:23.176280 kubelet[1755]: E0712 00:36:23.176243 1755 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jul 12 00:36:23.201238 kubelet[1755]: I0712 00:36:23.201204 1755 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jul 12 00:36:23.609461 kubelet[1755]: I0712 00:36:23.609352 1755 apiserver.go:52] "Watching apiserver"
Jul 12 00:36:23.633417 kubelet[1755]: I0712 00:36:23.633392 1755 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 12 00:36:23.678069 kubelet[1755]: E0712 00:36:23.678028 1755 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jul 12 00:36:23.678212 kubelet[1755]: E0712 00:36:23.678199 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:36:23.678324 kubelet[1755]: E0712 00:36:23.678288 1755 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Jul 12 00:36:23.678487 kubelet[1755]: E0712 00:36:23.678469 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:36:25.000276 systemd[1]: Reloading.
Jul 12 00:36:25.040409 /usr/lib/systemd/system-generators/torcx-generator[2050]: time="2025-07-12T00:36:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Jul 12 00:36:25.040439 /usr/lib/systemd/system-generators/torcx-generator[2050]: time="2025-07-12T00:36:25Z" level=info msg="torcx already run"
Jul 12 00:36:25.105271 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 12 00:36:25.105295 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 12 00:36:25.121539 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 00:36:25.199309 systemd[1]: Stopping kubelet.service...
Jul 12 00:36:25.225766 systemd[1]: kubelet.service: Deactivated successfully.
Jul 12 00:36:25.226069 systemd[1]: Stopped kubelet.service.
Jul 12 00:36:25.228003 kernel: kauditd_printk_skb: 43 callbacks suppressed
Jul 12 00:36:25.228074 kernel: audit: type=1131 audit(1752280585.224:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:36:25.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:36:25.227823 systemd[1]: Starting kubelet.service...
Jul 12 00:36:25.320534 systemd[1]: Started kubelet.service.
Jul 12 00:36:25.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:36:25.324444 kernel: audit: type=1130 audit(1752280585.319:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:36:25.358355 kubelet[2103]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 12 00:36:25.358722 kubelet[2103]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 12 00:36:25.358770 kubelet[2103]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 12 00:36:25.358911 kubelet[2103]: I0712 00:36:25.358866 2103 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:36:25.365984 kubelet[2103]: I0712 00:36:25.365943 2103 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 12 00:36:25.365984 kubelet[2103]: I0712 00:36:25.365976 2103 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:36:25.366204 kubelet[2103]: I0712 00:36:25.366177 2103 server.go:934] "Client rotation is on, will bootstrap in background" Jul 12 00:36:25.367611 kubelet[2103]: I0712 00:36:25.367587 2103 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 12 00:36:25.369717 kubelet[2103]: I0712 00:36:25.369697 2103 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:36:25.372578 kubelet[2103]: E0712 00:36:25.372546 2103 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:36:25.372578 kubelet[2103]: I0712 00:36:25.372577 2103 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:36:25.375150 kubelet[2103]: I0712 00:36:25.375120 2103 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:36:25.375505 kubelet[2103]: I0712 00:36:25.375485 2103 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 12 00:36:25.375625 kubelet[2103]: I0712 00:36:25.375590 2103 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:36:25.375865 kubelet[2103]: I0712 00:36:25.375621 2103 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicy
Options":null,"CgroupVersion":1} Jul 12 00:36:25.375865 kubelet[2103]: I0712 00:36:25.375867 2103 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:36:25.375984 kubelet[2103]: I0712 00:36:25.375877 2103 container_manager_linux.go:300] "Creating device plugin manager" Jul 12 00:36:25.375984 kubelet[2103]: I0712 00:36:25.375924 2103 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:36:25.376040 kubelet[2103]: I0712 00:36:25.376008 2103 kubelet.go:408] "Attempting to sync node with API server" Jul 12 00:36:25.376040 kubelet[2103]: I0712 00:36:25.376020 2103 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:36:25.376040 kubelet[2103]: I0712 00:36:25.376036 2103 kubelet.go:314] "Adding apiserver pod source" Jul 12 00:36:25.376103 kubelet[2103]: I0712 00:36:25.376054 2103 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:36:25.376920 kubelet[2103]: I0712 00:36:25.376883 2103 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 12 00:36:25.377507 kubelet[2103]: I0712 00:36:25.377488 2103 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:36:25.378857 kubelet[2103]: I0712 00:36:25.378835 2103 server.go:1274] "Started kubelet" Jul 12 00:36:25.379330 kubelet[2103]: I0712 00:36:25.379292 2103 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:36:25.379473 kubelet[2103]: I0712 00:36:25.379428 2103 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:36:25.380054 kubelet[2103]: I0712 00:36:25.379655 2103 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:36:25.394293 kernel: audit: type=1400 audit(1752280585.380:216): avc: denied { mac_admin } for pid=2103 comm="kubelet" capability=33 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:36:25.394365 kernel: audit: type=1401 audit(1752280585.380:216): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 12 00:36:25.394399 kernel: audit: type=1300 audit(1752280585.380:216): arch=c00000b7 syscall=5 success=no exit=-22 a0=40006a9980 a1=4000046390 a2=40006a9950 a3=25 items=0 ppid=1 pid=2103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:25.394416 kernel: audit: type=1327 audit(1752280585.380:216): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 12 00:36:25.394445 kernel: audit: type=1400 audit(1752280585.380:217): avc: denied { mac_admin } for pid=2103 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:36:25.380000 audit[2103]: AVC avc: denied { mac_admin } for pid=2103 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:36:25.380000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 12 00:36:25.380000 audit[2103]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40006a9980 a1=4000046390 a2=40006a9950 a3=25 items=0 ppid=1 pid=2103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:25.380000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 12 00:36:25.380000 audit[2103]: AVC avc: denied { mac_admin } for pid=2103 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:36:25.394625 kubelet[2103]: I0712 00:36:25.381687 2103 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Jul 12 00:36:25.394625 kubelet[2103]: I0712 00:36:25.381718 2103 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Jul 12 00:36:25.394625 kubelet[2103]: I0712 00:36:25.381740 2103 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:36:25.394625 kubelet[2103]: I0712 00:36:25.383006 2103 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:36:25.394625 kubelet[2103]: I0712 00:36:25.384598 2103 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 12 00:36:25.394625 kubelet[2103]: I0712 00:36:25.384715 2103 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 12 00:36:25.394625 kubelet[2103]: I0712 00:36:25.384846 2103 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:36:25.394625 kubelet[2103]: E0712 00:36:25.389990 2103 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:36:25.395379 kubelet[2103]: 
I0712 00:36:25.395361 2103 server.go:449] "Adding debug handlers to kubelet server" Jul 12 00:36:25.380000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 12 00:36:25.397645 kernel: audit: type=1401 audit(1752280585.380:217): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 12 00:36:25.380000 audit[2103]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000be34e0 a1=40000463a8 a2=40006a9a10 a3=25 items=0 ppid=1 pid=2103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:25.400772 kubelet[2103]: I0712 00:36:25.395956 2103 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:36:25.404172 kubelet[2103]: I0712 00:36:25.404136 2103 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 12 00:36:25.404172 kubelet[2103]: I0712 00:36:25.404163 2103 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 12 00:36:25.404254 kubelet[2103]: I0712 00:36:25.404182 2103 kubelet.go:2321] "Starting kubelet main sync loop" Jul 12 00:36:25.404254 kubelet[2103]: E0712 00:36:25.404222 2103 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:36:25.404378 kernel: audit: type=1300 audit(1752280585.380:217): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000be34e0 a1=40000463a8 a2=40006a9a10 a3=25 items=0 ppid=1 pid=2103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:25.380000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 12 00:36:25.405869 kubelet[2103]: I0712 00:36:25.405849 2103 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:36:25.405954 kubelet[2103]: I0712 00:36:25.405944 2103 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:36:25.406094 kubelet[2103]: I0712 00:36:25.406073 2103 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:36:25.408806 kernel: audit: type=1327 audit(1752280585.380:217): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 12 00:36:25.451123 kubelet[2103]: I0712 00:36:25.451091 2103 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 12 00:36:25.451123 kubelet[2103]: I0712 00:36:25.451113 2103 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 12 00:36:25.451123 kubelet[2103]: I0712 00:36:25.451134 2103 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:36:25.451305 kubelet[2103]: I0712 00:36:25.451263 2103 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 12 00:36:25.451305 kubelet[2103]: I0712 00:36:25.451273 2103 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 12 00:36:25.451305 kubelet[2103]: I0712 00:36:25.451292 2103 policy_none.go:49] "None policy: Start" Jul 12 00:36:25.451845 kubelet[2103]: I0712 00:36:25.451816 2103 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 12 
00:36:25.451898 kubelet[2103]: I0712 00:36:25.451856 2103 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:36:25.452034 kubelet[2103]: I0712 00:36:25.452016 2103 state_mem.go:75] "Updated machine memory state" Jul 12 00:36:25.453158 kubelet[2103]: I0712 00:36:25.453135 2103 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:36:25.451000 audit[2103]: AVC avc: denied { mac_admin } for pid=2103 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:36:25.451000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 12 00:36:25.451000 audit[2103]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4001233a10 a1=4001208ea0 a2=40012339e0 a3=25 items=0 ppid=1 pid=2103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:25.451000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 12 00:36:25.453352 kubelet[2103]: I0712 00:36:25.453218 2103 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Jul 12 00:36:25.453352 kubelet[2103]: I0712 00:36:25.453344 2103 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:36:25.453415 kubelet[2103]: I0712 00:36:25.453355 2103 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:36:25.453750 kubelet[2103]: I0712 00:36:25.453728 2103 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:36:25.556754 kubelet[2103]: I0712 00:36:25.556699 2103 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:36:25.563794 kubelet[2103]: I0712 00:36:25.563760 2103 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 12 00:36:25.563882 kubelet[2103]: I0712 00:36:25.563844 2103 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 12 00:36:25.586286 kubelet[2103]: I0712 00:36:25.586176 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:36:25.586286 kubelet[2103]: I0712 00:36:25.586218 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:36:25.586286 kubelet[2103]: I0712 00:36:25.586236 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/1dc4853c1487ffdb4db23ce77eca59b6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1dc4853c1487ffdb4db23ce77eca59b6\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:36:25.586286 kubelet[2103]: I0712 00:36:25.586252 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1dc4853c1487ffdb4db23ce77eca59b6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1dc4853c1487ffdb4db23ce77eca59b6\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:36:25.586286 kubelet[2103]: I0712 00:36:25.586274 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:36:25.586476 kubelet[2103]: I0712 00:36:25.586290 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:36:25.586476 kubelet[2103]: I0712 00:36:25.586305 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:36:25.586476 kubelet[2103]: I0712 00:36:25.586320 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 12 00:36:25.586476 kubelet[2103]: I0712 00:36:25.586334 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1dc4853c1487ffdb4db23ce77eca59b6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1dc4853c1487ffdb4db23ce77eca59b6\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:36:25.811733 kubelet[2103]: E0712 00:36:25.811698 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:36:25.811837 kubelet[2103]: E0712 00:36:25.811740 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:36:25.811837 kubelet[2103]: E0712 00:36:25.811831 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:36:26.379244 kubelet[2103]: I0712 00:36:26.379212 2103 apiserver.go:52] "Watching apiserver" Jul 12 00:36:26.384876 kubelet[2103]: I0712 00:36:26.384843 2103 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 12 00:36:26.432476 kubelet[2103]: E0712 00:36:26.432447 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:36:26.433079 kubelet[2103]: E0712 00:36:26.433058 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:36:26.447674 kubelet[2103]: E0712 00:36:26.447639 2103 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 12 00:36:26.447820 kubelet[2103]: E0712 00:36:26.447782 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:36:26.450118 kubelet[2103]: I0712 00:36:26.450074 2103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.45006098 podStartE2EDuration="1.45006098s" podCreationTimestamp="2025-07-12 00:36:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:36:26.449953516 +0000 UTC m=+1.125160414" watchObservedRunningTime="2025-07-12 00:36:26.45006098 +0000 UTC m=+1.125267918" Jul 12 00:36:26.457111 kubelet[2103]: I0712 00:36:26.457064 2103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.457050263 podStartE2EDuration="1.457050263s" podCreationTimestamp="2025-07-12 00:36:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:36:26.456683242 +0000 UTC m=+1.131890140" watchObservedRunningTime="2025-07-12 00:36:26.457050263 +0000 UTC m=+1.132257121" Jul 12 00:36:26.465621 kubelet[2103]: I0712 00:36:26.465553 2103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.465536005 podStartE2EDuration="1.465536005s" podCreationTimestamp="2025-07-12 00:36:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:36:26.465507908 +0000 UTC m=+1.140714806" watchObservedRunningTime="2025-07-12 00:36:26.465536005 +0000 UTC m=+1.140742903" Jul 12 00:36:27.433691 kubelet[2103]: E0712 00:36:27.433665 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:36:28.078391 kubelet[2103]: E0712 00:36:28.078354 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:36:32.500789 kubelet[2103]: I0712 00:36:32.500752 2103 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 12 00:36:32.501546 env[1324]: time="2025-07-12T00:36:32.501495551Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 12 00:36:32.501798 kubelet[2103]: I0712 00:36:32.501736 2103 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 12 00:36:32.635164 kubelet[2103]: E0712 00:36:32.635123 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:36:33.238532 kubelet[2103]: I0712 00:36:33.238488 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/37e2b4f3-a801-4e23-ae5d-71a7f2affcc9-kube-proxy\") pod \"kube-proxy-df6vt\" (UID: \"37e2b4f3-a801-4e23-ae5d-71a7f2affcc9\") " pod="kube-system/kube-proxy-df6vt" Jul 12 00:36:33.238836 kubelet[2103]: I0712 00:36:33.238816 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/37e2b4f3-a801-4e23-ae5d-71a7f2affcc9-xtables-lock\") pod \"kube-proxy-df6vt\" (UID: \"37e2b4f3-a801-4e23-ae5d-71a7f2affcc9\") " pod="kube-system/kube-proxy-df6vt" Jul 12 00:36:33.238930 kubelet[2103]: I0712 00:36:33.238916 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/37e2b4f3-a801-4e23-ae5d-71a7f2affcc9-lib-modules\") pod \"kube-proxy-df6vt\" (UID: \"37e2b4f3-a801-4e23-ae5d-71a7f2affcc9\") " pod="kube-system/kube-proxy-df6vt" Jul 12 00:36:33.239032 kubelet[2103]: I0712 00:36:33.239016 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkhwr\" (UniqueName: \"kubernetes.io/projected/37e2b4f3-a801-4e23-ae5d-71a7f2affcc9-kube-api-access-fkhwr\") pod \"kube-proxy-df6vt\" (UID: \"37e2b4f3-a801-4e23-ae5d-71a7f2affcc9\") " pod="kube-system/kube-proxy-df6vt" Jul 12 00:36:33.328042 kubelet[2103]: E0712 00:36:33.328008 2103 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:36:33.349907 kubelet[2103]: I0712 00:36:33.349873 2103 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 12 00:36:33.443131 kubelet[2103]: E0712 00:36:33.443077 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:36:33.443451 kubelet[2103]: E0712 00:36:33.443419 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:36:33.520811 kubelet[2103]: E0712 00:36:33.520698 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:36:33.521339 env[1324]: time="2025-07-12T00:36:33.521288968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-df6vt,Uid:37e2b4f3-a801-4e23-ae5d-71a7f2affcc9,Namespace:kube-system,Attempt:0,}" Jul 12 00:36:33.539300 env[1324]: time="2025-07-12T00:36:33.539236228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:36:33.539474 env[1324]: time="2025-07-12T00:36:33.539440184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:36:33.539563 env[1324]: time="2025-07-12T00:36:33.539535659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:36:33.539850 env[1324]: time="2025-07-12T00:36:33.539774628Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/47348e976e9115998a442bcc25063c85a5a50684a3dc51f327530af75b8efe6a pid=2161 runtime=io.containerd.runc.v2 Jul 12 00:36:33.612063 env[1324]: time="2025-07-12T00:36:33.612025837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-df6vt,Uid:37e2b4f3-a801-4e23-ae5d-71a7f2affcc9,Namespace:kube-system,Attempt:0,} returns sandbox id \"47348e976e9115998a442bcc25063c85a5a50684a3dc51f327530af75b8efe6a\"" Jul 12 00:36:33.612864 kubelet[2103]: E0712 00:36:33.612820 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:36:33.615576 env[1324]: time="2025-07-12T00:36:33.615540381Z" level=info msg="CreateContainer within sandbox \"47348e976e9115998a442bcc25063c85a5a50684a3dc51f327530af75b8efe6a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 12 00:36:33.630820 env[1324]: time="2025-07-12T00:36:33.630777435Z" level=info msg="CreateContainer within sandbox \"47348e976e9115998a442bcc25063c85a5a50684a3dc51f327530af75b8efe6a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0551ed67aa0a4969e049bf0e46295ca0fbc701a81474bc2b740912066e3c73e7\"" Jul 12 00:36:33.631819 env[1324]: time="2025-07-12T00:36:33.631732869Z" level=info msg="StartContainer for \"0551ed67aa0a4969e049bf0e46295ca0fbc701a81474bc2b740912066e3c73e7\"" Jul 12 00:36:33.642255 kubelet[2103]: I0712 00:36:33.642168 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7thwl\" (UniqueName: \"kubernetes.io/projected/91f57532-2bb4-4fa8-b7fc-e0d3fb96f8b9-kube-api-access-7thwl\") pod \"tigera-operator-5bf8dfcb4-qwrsh\" (UID: 
\"91f57532-2bb4-4fa8-b7fc-e0d3fb96f8b9\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-qwrsh" Jul 12 00:36:33.642255 kubelet[2103]: I0712 00:36:33.642211 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/91f57532-2bb4-4fa8-b7fc-e0d3fb96f8b9-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-qwrsh\" (UID: \"91f57532-2bb4-4fa8-b7fc-e0d3fb96f8b9\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-qwrsh" Jul 12 00:36:33.683506 env[1324]: time="2025-07-12T00:36:33.682491984Z" level=info msg="StartContainer for \"0551ed67aa0a4969e049bf0e46295ca0fbc701a81474bc2b740912066e3c73e7\" returns successfully" Jul 12 00:36:33.879000 audit[2266]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2266 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:33.881651 kernel: kauditd_printk_skb: 4 callbacks suppressed Jul 12 00:36:33.881716 kernel: audit: type=1325 audit(1752280593.879:219): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2266 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:33.883442 kernel: audit: type=1300 audit(1752280593.879:219): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe391d8e0 a2=0 a3=1 items=0 ppid=2213 pid=2266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:33.879000 audit[2266]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe391d8e0 a2=0 a3=1 items=0 ppid=2213 pid=2266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:33.879000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 12 00:36:33.888974 kernel: audit: type=1327 audit(1752280593.879:219): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 12 00:36:33.889030 kernel: audit: type=1325 audit(1752280593.879:220): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2267 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:36:33.879000 audit[2267]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2267 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:36:33.890670 env[1324]: time="2025-07-12T00:36:33.890634136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-qwrsh,Uid:91f57532-2bb4-4fa8-b7fc-e0d3fb96f8b9,Namespace:tigera-operator,Attempt:0,}" Jul 12 00:36:33.879000 audit[2267]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffeb50de80 a2=0 a3=1 items=0 ppid=2213 pid=2267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:33.894891 kernel: audit: type=1300 audit(1752280593.879:220): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffeb50de80 a2=0 a3=1 items=0 ppid=2213 pid=2267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:33.879000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 12 00:36:33.896920 kernel: audit: type=1327 audit(1752280593.879:220): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 
12 00:36:33.880000 audit[2268]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2268 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:33.898761 kernel: audit: type=1325 audit(1752280593.880:221): table=nat:40 family=2 entries=1 op=nft_register_chain pid=2268 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:33.898799 kernel: audit: type=1300 audit(1752280593.880:221): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc74b62b0 a2=0 a3=1 items=0 ppid=2213 pid=2268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:33.880000 audit[2268]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc74b62b0 a2=0 a3=1 items=0 ppid=2213 pid=2268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:33.880000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 12 00:36:33.904167 kernel: audit: type=1327 audit(1752280593.880:221): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 12 00:36:33.904232 kernel: audit: type=1325 audit(1752280593.881:222): table=filter:41 family=2 entries=1 op=nft_register_chain pid=2269 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:33.881000 audit[2269]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2269 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:33.881000 audit[2269]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdc7bff10 a2=0 a3=1 items=0 ppid=2213 pid=2269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:33.881000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 12 00:36:33.888000 audit[2270]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2270 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:36:33.888000 audit[2270]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff83da6d0 a2=0 a3=1 items=0 ppid=2213 pid=2270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:33.888000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 12 00:36:33.889000 audit[2271]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2271 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:36:33.889000 audit[2271]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff56328a0 a2=0 a3=1 items=0 ppid=2213 pid=2271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:33.889000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 12 00:36:33.914282 env[1324]: time="2025-07-12T00:36:33.914106806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:36:33.914282 env[1324]: time="2025-07-12T00:36:33.914144500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:36:33.914282 env[1324]: time="2025-07-12T00:36:33.914158705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:36:33.914426 env[1324]: time="2025-07-12T00:36:33.914358379Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/719ad605c52147407933b291e9a3ada26a5996b761f11202ed69816ae57b3013 pid=2278 runtime=io.containerd.runc.v2 Jul 12 00:36:33.961137 env[1324]: time="2025-07-12T00:36:33.961098682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-qwrsh,Uid:91f57532-2bb4-4fa8-b7fc-e0d3fb96f8b9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"719ad605c52147407933b291e9a3ada26a5996b761f11202ed69816ae57b3013\"" Jul 12 00:36:33.962848 env[1324]: time="2025-07-12T00:36:33.962817880Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 12 00:36:33.982000 audit[2312]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2312 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:33.982000 audit[2312]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffee9a37e0 a2=0 a3=1 items=0 ppid=2213 pid=2312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:33.982000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 12 00:36:33.986000 audit[2314]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2314 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:33.986000 audit[2314]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc3f4f080 a2=0 a3=1 items=0 ppid=2213 
pid=2314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:33.986000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jul 12 00:36:33.989000 audit[2317]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2317 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:33.989000 audit[2317]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe7f85630 a2=0 a3=1 items=0 ppid=2213 pid=2317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:33.989000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jul 12 00:36:33.990000 audit[2318]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2318 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:33.990000 audit[2318]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe3290ba0 a2=0 a3=1 items=0 ppid=2213 pid=2318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:33.990000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 12 
00:36:33.992000 audit[2320]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2320 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:33.992000 audit[2320]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd3876550 a2=0 a3=1 items=0 ppid=2213 pid=2320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:33.992000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 12 00:36:33.993000 audit[2321]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2321 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:33.993000 audit[2321]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe8734ee0 a2=0 a3=1 items=0 ppid=2213 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:33.993000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 12 00:36:33.995000 audit[2323]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2323 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:33.995000 audit[2323]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff1ae1bb0 a2=0 a3=1 items=0 ppid=2213 pid=2323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:33.995000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 12 00:36:33.998000 audit[2326]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2326 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:33.998000 audit[2326]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffffecd4040 a2=0 a3=1 items=0 ppid=2213 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:33.998000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jul 12 00:36:33.999000 audit[2327]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2327 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:33.999000 audit[2327]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcb99d290 a2=0 a3=1 items=0 ppid=2213 pid=2327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:33.999000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 12 00:36:34.001000 audit[2329]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2329 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:34.001000 audit[2329]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=528 a0=3 a1=ffffc6eb28e0 a2=0 a3=1 items=0 ppid=2213 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.001000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 12 00:36:34.003000 audit[2330]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2330 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:34.003000 audit[2330]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe92f1a10 a2=0 a3=1 items=0 ppid=2213 pid=2330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.003000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 12 00:36:34.005000 audit[2332]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2332 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:34.005000 audit[2332]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff2e135a0 a2=0 a3=1 items=0 ppid=2213 pid=2332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.005000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 12 00:36:34.008000 audit[2335]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2335 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:34.008000 audit[2335]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff2f592c0 a2=0 a3=1 items=0 ppid=2213 pid=2335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.008000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 12 00:36:34.011000 audit[2338]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2338 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:34.011000 audit[2338]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc93bc940 a2=0 a3=1 items=0 ppid=2213 pid=2338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.011000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 12 00:36:34.013000 audit[2339]: NETFILTER_CFG table=nat:58 family=2 entries=1 
op=nft_register_chain pid=2339 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:34.013000 audit[2339]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe1f6a500 a2=0 a3=1 items=0 ppid=2213 pid=2339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.013000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 12 00:36:34.015000 audit[2341]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2341 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:34.015000 audit[2341]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffcf67dfd0 a2=0 a3=1 items=0 ppid=2213 pid=2341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.015000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 12 00:36:34.018000 audit[2344]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2344 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:34.018000 audit[2344]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc7614e30 a2=0 a3=1 items=0 ppid=2213 pid=2344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.018000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 12 00:36:34.019000 audit[2345]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2345 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:34.019000 audit[2345]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff08c8850 a2=0 a3=1 items=0 ppid=2213 pid=2345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.019000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 12 00:36:34.021000 audit[2347]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2347 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 12 00:36:34.021000 audit[2347]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffecef7af0 a2=0 a3=1 items=0 ppid=2213 pid=2347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.021000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 12 00:36:34.043000 audit[2353]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2353 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:36:34.043000 audit[2353]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe5f5f3a0 a2=0 a3=1 items=0 ppid=2213 pid=2353 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.043000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:36:34.054000 audit[2353]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2353 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:36:34.054000 audit[2353]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffe5f5f3a0 a2=0 a3=1 items=0 ppid=2213 pid=2353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.054000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:36:34.055000 audit[2358]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2358 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:36:34.055000 audit[2358]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffda1b82c0 a2=0 a3=1 items=0 ppid=2213 pid=2358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.055000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 12 00:36:34.058000 audit[2360]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2360 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:36:34.058000 audit[2360]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd4cee590 
a2=0 a3=1 items=0 ppid=2213 pid=2360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.058000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jul 12 00:36:34.061000 audit[2363]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2363 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:36:34.061000 audit[2363]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc1e00410 a2=0 a3=1 items=0 ppid=2213 pid=2363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.061000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jul 12 00:36:34.063000 audit[2364]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2364 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:36:34.063000 audit[2364]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe767ce20 a2=0 a3=1 items=0 ppid=2213 pid=2364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.063000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 12 00:36:34.065000 audit[2366]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2366 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:36:34.065000 audit[2366]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff289b010 a2=0 a3=1 items=0 ppid=2213 pid=2366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.065000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 12 00:36:34.066000 audit[2367]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2367 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:36:34.066000 audit[2367]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd9450c40 a2=0 a3=1 items=0 ppid=2213 pid=2367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.066000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 12 00:36:34.068000 audit[2369]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2369 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:36:34.068000 audit[2369]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffeb00d520 a2=0 a3=1 items=0 ppid=2213 pid=2369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.068000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jul 12 00:36:34.072000 audit[2372]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2372 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:36:34.072000 audit[2372]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffe9d61c20 a2=0 a3=1 items=0 ppid=2213 pid=2372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.072000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 12 00:36:34.073000 audit[2373]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2373 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:36:34.073000 audit[2373]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd98cc130 a2=0 a3=1 items=0 ppid=2213 pid=2373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.073000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 12 00:36:34.075000 audit[2375]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule 
pid=2375 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:36:34.075000 audit[2375]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe567bcb0 a2=0 a3=1 items=0 ppid=2213 pid=2375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.075000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 12 00:36:34.076000 audit[2376]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2376 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:36:34.076000 audit[2376]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffd986cb0 a2=0 a3=1 items=0 ppid=2213 pid=2376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.076000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 12 00:36:34.078000 audit[2378]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2378 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:36:34.078000 audit[2378]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc4cc5960 a2=0 a3=1 items=0 ppid=2213 pid=2378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.078000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 12 00:36:34.083000 audit[2381]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2381 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:36:34.083000 audit[2381]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe3526f80 a2=0 a3=1 items=0 ppid=2213 pid=2381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.083000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 12 00:36:34.086000 audit[2384]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2384 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:36:34.086000 audit[2384]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd5a17f70 a2=0 a3=1 items=0 ppid=2213 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.086000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jul 12 00:36:34.087000 audit[2385]: NETFILTER_CFG table=nat:79 family=10 entries=1 
op=nft_register_chain pid=2385 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:36:34.087000 audit[2385]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdb202310 a2=0 a3=1 items=0 ppid=2213 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.087000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 12 00:36:34.090000 audit[2387]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2387 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:36:34.090000 audit[2387]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffe81e0d30 a2=0 a3=1 items=0 ppid=2213 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.090000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 12 00:36:34.093000 audit[2390]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2390 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:36:34.093000 audit[2390]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffc8e321b0 a2=0 a3=1 items=0 ppid=2213 pid=2390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.093000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 12 00:36:34.095000 audit[2391]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2391 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:36:34.095000 audit[2391]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc2734d50 a2=0 a3=1 items=0 ppid=2213 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.095000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 12 00:36:34.097000 audit[2393]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2393 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:36:34.097000 audit[2393]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffe62751d0 a2=0 a3=1 items=0 ppid=2213 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.097000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 12 00:36:34.098000 audit[2394]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2394 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:36:34.098000 audit[2394]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc934e9d0 a2=0 a3=1 items=0 ppid=2213 
pid=2394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.098000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 12 00:36:34.101000 audit[2396]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2396 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:36:34.101000 audit[2396]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffcf15c2a0 a2=0 a3=1 items=0 ppid=2213 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.101000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 12 00:36:34.104000 audit[2399]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2399 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 12 00:36:34.104000 audit[2399]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe88bb7d0 a2=0 a3=1 items=0 ppid=2213 pid=2399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.104000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 12 00:36:34.107000 audit[2401]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2401 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 12 00:36:34.107000 audit[2401]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 
a1=ffffec4c07a0 a2=0 a3=1 items=0 ppid=2213 pid=2401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.107000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:36:34.108000 audit[2401]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2401 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 12 00:36:34.108000 audit[2401]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffec4c07a0 a2=0 a3=1 items=0 ppid=2213 pid=2401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:34.108000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:36:34.446133 kubelet[2103]: E0712 00:36:34.446090 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:36:34.455137 kubelet[2103]: I0712 00:36:34.455068 2103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-df6vt" podStartSLOduration=1.455052634 podStartE2EDuration="1.455052634s" podCreationTimestamp="2025-07-12 00:36:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:36:34.454409248 +0000 UTC m=+9.129616146" watchObservedRunningTime="2025-07-12 00:36:34.455052634 +0000 UTC m=+9.130259492" Jul 12 00:36:35.101785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3946847496.mount: Deactivated 
successfully. Jul 12 00:36:36.418014 env[1324]: time="2025-07-12T00:36:36.417956830Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:36.419339 env[1324]: time="2025-07-12T00:36:36.419310576Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:36.420827 env[1324]: time="2025-07-12T00:36:36.420797124Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:36.422399 env[1324]: time="2025-07-12T00:36:36.422363537Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:36.422899 env[1324]: time="2025-07-12T00:36:36.422870457Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 12 00:36:36.426145 env[1324]: time="2025-07-12T00:36:36.426109597Z" level=info msg="CreateContainer within sandbox \"719ad605c52147407933b291e9a3ada26a5996b761f11202ed69816ae57b3013\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 12 00:36:36.435696 env[1324]: time="2025-07-12T00:36:36.435645959Z" level=info msg="CreateContainer within sandbox \"719ad605c52147407933b291e9a3ada26a5996b761f11202ed69816ae57b3013\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d19ca99a7c8fc6b1f2a103357ec183efa15166afec895885ef43539e36371f7d\"" Jul 12 00:36:36.437459 env[1324]: time="2025-07-12T00:36:36.437423879Z" level=info 
msg="StartContainer for \"d19ca99a7c8fc6b1f2a103357ec183efa15166afec895885ef43539e36371f7d\"" Jul 12 00:36:36.494777 env[1324]: time="2025-07-12T00:36:36.494726041Z" level=info msg="StartContainer for \"d19ca99a7c8fc6b1f2a103357ec183efa15166afec895885ef43539e36371f7d\" returns successfully" Jul 12 00:36:38.089471 kubelet[2103]: E0712 00:36:38.089441 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:36:38.108508 kubelet[2103]: I0712 00:36:38.108444 2103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-qwrsh" podStartSLOduration=2.646947319 podStartE2EDuration="5.108429535s" podCreationTimestamp="2025-07-12 00:36:33 +0000 UTC" firstStartedPulling="2025-07-12 00:36:33.962288124 +0000 UTC m=+8.637495022" lastFinishedPulling="2025-07-12 00:36:36.42377038 +0000 UTC m=+11.098977238" observedRunningTime="2025-07-12 00:36:37.46368108 +0000 UTC m=+12.138887978" watchObservedRunningTime="2025-07-12 00:36:38.108429535 +0000 UTC m=+12.783636433" Jul 12 00:36:41.843163 sudo[1486]: pam_unix(sudo:session): session closed for user root Jul 12 00:36:41.842000 audit[1486]: USER_END pid=1486 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 12 00:36:41.847266 kernel: kauditd_printk_skb: 143 callbacks suppressed Jul 12 00:36:41.847328 kernel: audit: type=1106 audit(1752280601.842:270): pid=1486 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jul 12 00:36:41.842000 audit[1486]: CRED_DISP pid=1486 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 12 00:36:41.850320 kernel: audit: type=1104 audit(1752280601.842:271): pid=1486 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 12 00:36:41.850883 sshd[1480]: pam_unix(sshd:session): session closed for user core Jul 12 00:36:41.852000 audit[1480]: USER_END pid=1480 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:36:41.854899 systemd[1]: sshd@6-10.0.0.111:22-10.0.0.1:40434.service: Deactivated successfully. Jul 12 00:36:41.855858 systemd[1]: session-7.scope: Deactivated successfully. Jul 12 00:36:41.855865 systemd-logind[1309]: Session 7 logged out. Waiting for processes to exit. Jul 12 00:36:41.856765 systemd-logind[1309]: Removed session 7. 
Jul 12 00:36:41.852000 audit[1480]: CRED_DISP pid=1480 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:36:41.861118 kernel: audit: type=1106 audit(1752280601.852:272): pid=1480 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:36:41.861212 kernel: audit: type=1104 audit(1752280601.852:273): pid=1480 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:36:41.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.111:22-10.0.0.1:40434 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:36:41.864405 kernel: audit: type=1131 audit(1752280601.854:274): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.111:22-10.0.0.1:40434 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:36:41.988812 update_engine[1311]: I0712 00:36:41.988430 1311 update_attempter.cc:509] Updating boot flags... 
Jul 12 00:36:42.570000 audit[2509]: NETFILTER_CFG table=filter:89 family=2 entries=14 op=nft_register_rule pid=2509 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:36:42.574398 kernel: audit: type=1325 audit(1752280602.570:275): table=filter:89 family=2 entries=14 op=nft_register_rule pid=2509 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:36:42.570000 audit[2509]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffffb1b7c10 a2=0 a3=1 items=0 ppid=2213 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:42.570000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:36:42.583016 kernel: audit: type=1300 audit(1752280602.570:275): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffffb1b7c10 a2=0 a3=1 items=0 ppid=2213 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:42.583093 kernel: audit: type=1327 audit(1752280602.570:275): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:36:42.583000 audit[2509]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2509 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:36:42.583000 audit[2509]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffb1b7c10 a2=0 a3=1 items=0 ppid=2213 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:42.589828 
kernel: audit: type=1325 audit(1752280602.583:276): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2509 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:36:42.589894 kernel: audit: type=1300 audit(1752280602.583:276): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffb1b7c10 a2=0 a3=1 items=0 ppid=2213 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:42.583000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:36:42.601000 audit[2511]: NETFILTER_CFG table=filter:91 family=2 entries=15 op=nft_register_rule pid=2511 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:36:42.601000 audit[2511]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd96cf6e0 a2=0 a3=1 items=0 ppid=2213 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:42.601000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:36:42.605000 audit[2511]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2511 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:36:42.605000 audit[2511]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd96cf6e0 a2=0 a3=1 items=0 ppid=2213 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:42.605000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:36:46.603000 audit[2514]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2514 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:36:46.603000 audit[2514]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffd7868a90 a2=0 a3=1 items=0 ppid=2213 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:46.603000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:36:46.609000 audit[2514]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2514 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:36:46.609000 audit[2514]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd7868a90 a2=0 a3=1 items=0 ppid=2213 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:46.609000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:36:46.625000 audit[2516]: NETFILTER_CFG table=filter:95 family=2 entries=18 op=nft_register_rule pid=2516 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:36:46.625000 audit[2516]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffffb576ab0 a2=0 a3=1 items=0 ppid=2213 pid=2516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jul 12 00:36:46.625000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:36:46.629000 audit[2516]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2516 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:36:46.629000 audit[2516]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffb576ab0 a2=0 a3=1 items=0 ppid=2213 pid=2516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:46.629000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:36:46.639444 kubelet[2103]: I0712 00:36:46.639357 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5f8k\" (UniqueName: \"kubernetes.io/projected/50a29d9b-cb51-4cc5-8954-ecdde1096fff-kube-api-access-g5f8k\") pod \"calico-typha-5f69b79bf8-h5ck9\" (UID: \"50a29d9b-cb51-4cc5-8954-ecdde1096fff\") " pod="calico-system/calico-typha-5f69b79bf8-h5ck9" Jul 12 00:36:46.639763 kubelet[2103]: I0712 00:36:46.639461 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/50a29d9b-cb51-4cc5-8954-ecdde1096fff-typha-certs\") pod \"calico-typha-5f69b79bf8-h5ck9\" (UID: \"50a29d9b-cb51-4cc5-8954-ecdde1096fff\") " pod="calico-system/calico-typha-5f69b79bf8-h5ck9" Jul 12 00:36:46.639763 kubelet[2103]: I0712 00:36:46.639482 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50a29d9b-cb51-4cc5-8954-ecdde1096fff-tigera-ca-bundle\") pod \"calico-typha-5f69b79bf8-h5ck9\" (UID: 
\"50a29d9b-cb51-4cc5-8954-ecdde1096fff\") " pod="calico-system/calico-typha-5f69b79bf8-h5ck9" Jul 12 00:36:46.895447 kubelet[2103]: E0712 00:36:46.895340 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:36:46.895811 env[1324]: time="2025-07-12T00:36:46.895772389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f69b79bf8-h5ck9,Uid:50a29d9b-cb51-4cc5-8954-ecdde1096fff,Namespace:calico-system,Attempt:0,}" Jul 12 00:36:46.917138 env[1324]: time="2025-07-12T00:36:46.917057427Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:36:46.917138 env[1324]: time="2025-07-12T00:36:46.917136722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:36:46.917309 env[1324]: time="2025-07-12T00:36:46.917161527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:36:46.917341 env[1324]: time="2025-07-12T00:36:46.917319317Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0ca02565f54980351f4a6d93325db3e96b345d64ffec20f799c18b6a7b513a80 pid=2527 runtime=io.containerd.runc.v2 Jul 12 00:36:46.941682 kubelet[2103]: I0712 00:36:46.941620 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/78777352-9861-450c-adfe-86e885a34db3-flexvol-driver-host\") pod \"calico-node-kxbkw\" (UID: \"78777352-9861-450c-adfe-86e885a34db3\") " pod="calico-system/calico-node-kxbkw" Jul 12 00:36:46.941682 kubelet[2103]: I0712 00:36:46.941674 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/78777352-9861-450c-adfe-86e885a34db3-cni-bin-dir\") pod \"calico-node-kxbkw\" (UID: \"78777352-9861-450c-adfe-86e885a34db3\") " pod="calico-system/calico-node-kxbkw" Jul 12 00:36:46.941905 kubelet[2103]: I0712 00:36:46.941695 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/78777352-9861-450c-adfe-86e885a34db3-cni-net-dir\") pod \"calico-node-kxbkw\" (UID: \"78777352-9861-450c-adfe-86e885a34db3\") " pod="calico-system/calico-node-kxbkw" Jul 12 00:36:46.941905 kubelet[2103]: I0712 00:36:46.941715 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/78777352-9861-450c-adfe-86e885a34db3-var-lib-calico\") pod \"calico-node-kxbkw\" (UID: \"78777352-9861-450c-adfe-86e885a34db3\") " pod="calico-system/calico-node-kxbkw" Jul 12 00:36:46.941905 kubelet[2103]: I0712 00:36:46.941730 2103 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78777352-9861-450c-adfe-86e885a34db3-lib-modules\") pod \"calico-node-kxbkw\" (UID: \"78777352-9861-450c-adfe-86e885a34db3\") " pod="calico-system/calico-node-kxbkw" Jul 12 00:36:46.941905 kubelet[2103]: I0712 00:36:46.941774 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/78777352-9861-450c-adfe-86e885a34db3-policysync\") pod \"calico-node-kxbkw\" (UID: \"78777352-9861-450c-adfe-86e885a34db3\") " pod="calico-system/calico-node-kxbkw" Jul 12 00:36:46.941905 kubelet[2103]: I0712 00:36:46.941811 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78777352-9861-450c-adfe-86e885a34db3-xtables-lock\") pod \"calico-node-kxbkw\" (UID: \"78777352-9861-450c-adfe-86e885a34db3\") " pod="calico-system/calico-node-kxbkw" Jul 12 00:36:46.942108 kubelet[2103]: I0712 00:36:46.941832 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/78777352-9861-450c-adfe-86e885a34db3-cni-log-dir\") pod \"calico-node-kxbkw\" (UID: \"78777352-9861-450c-adfe-86e885a34db3\") " pod="calico-system/calico-node-kxbkw" Jul 12 00:36:46.942108 kubelet[2103]: I0712 00:36:46.941848 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78777352-9861-450c-adfe-86e885a34db3-tigera-ca-bundle\") pod \"calico-node-kxbkw\" (UID: \"78777352-9861-450c-adfe-86e885a34db3\") " pod="calico-system/calico-node-kxbkw" Jul 12 00:36:46.942108 kubelet[2103]: I0712 00:36:46.941865 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-z6ncf\" (UniqueName: \"kubernetes.io/projected/78777352-9861-450c-adfe-86e885a34db3-kube-api-access-z6ncf\") pod \"calico-node-kxbkw\" (UID: \"78777352-9861-450c-adfe-86e885a34db3\") " pod="calico-system/calico-node-kxbkw" Jul 12 00:36:46.942108 kubelet[2103]: I0712 00:36:46.941885 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/78777352-9861-450c-adfe-86e885a34db3-var-run-calico\") pod \"calico-node-kxbkw\" (UID: \"78777352-9861-450c-adfe-86e885a34db3\") " pod="calico-system/calico-node-kxbkw" Jul 12 00:36:46.942108 kubelet[2103]: I0712 00:36:46.941902 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/78777352-9861-450c-adfe-86e885a34db3-node-certs\") pod \"calico-node-kxbkw\" (UID: \"78777352-9861-450c-adfe-86e885a34db3\") " pod="calico-system/calico-node-kxbkw" Jul 12 00:36:47.006132 env[1324]: time="2025-07-12T00:36:47.006092431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f69b79bf8-h5ck9,Uid:50a29d9b-cb51-4cc5-8954-ecdde1096fff,Namespace:calico-system,Attempt:0,} returns sandbox id \"0ca02565f54980351f4a6d93325db3e96b345d64ffec20f799c18b6a7b513a80\"" Jul 12 00:36:47.007410 kubelet[2103]: E0712 00:36:47.006909 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:36:47.007961 env[1324]: time="2025-07-12T00:36:47.007932004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 12 00:36:47.044672 kubelet[2103]: E0712 00:36:47.044513 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.044672 kubelet[2103]: W0712 00:36:47.044533 2103 driver-call.go:149] FlexVolume: driver call 
failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.044672 kubelet[2103]: E0712 00:36:47.044567 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:47.046598 kubelet[2103]: E0712 00:36:47.046532 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.046598 kubelet[2103]: W0712 00:36:47.046555 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.046598 kubelet[2103]: E0712 00:36:47.046569 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:47.050171 kubelet[2103]: E0712 00:36:47.050104 2103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79v58" podUID="6f21bec0-521a-455c-b964-ef73ea0151cf" Jul 12 00:36:47.051624 kubelet[2103]: E0712 00:36:47.051607 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.051776 kubelet[2103]: W0712 00:36:47.051761 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.051844 kubelet[2103]: E0712 00:36:47.051829 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:47.061202 kubelet[2103]: E0712 00:36:47.061173 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.061329 kubelet[2103]: W0712 00:36:47.061305 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.061437 kubelet[2103]: E0712 00:36:47.061421 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:47.132749 kubelet[2103]: E0712 00:36:47.132721 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.132879 kubelet[2103]: W0712 00:36:47.132863 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.132952 kubelet[2103]: E0712 00:36:47.132940 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:47.133210 kubelet[2103]: E0712 00:36:47.133199 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.133297 kubelet[2103]: W0712 00:36:47.133285 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.133375 kubelet[2103]: E0712 00:36:47.133363 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:47.133625 kubelet[2103]: E0712 00:36:47.133613 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.133706 kubelet[2103]: W0712 00:36:47.133694 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.133779 kubelet[2103]: E0712 00:36:47.133768 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:47.134161 kubelet[2103]: E0712 00:36:47.134149 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.134244 kubelet[2103]: W0712 00:36:47.134232 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.134319 kubelet[2103]: E0712 00:36:47.134308 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:47.134584 kubelet[2103]: E0712 00:36:47.134572 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.134664 kubelet[2103]: W0712 00:36:47.134652 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.134745 kubelet[2103]: E0712 00:36:47.134731 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:47.134972 kubelet[2103]: E0712 00:36:47.134961 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.135048 kubelet[2103]: W0712 00:36:47.135037 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.135110 kubelet[2103]: E0712 00:36:47.135092 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:47.135336 kubelet[2103]: E0712 00:36:47.135317 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.135442 kubelet[2103]: W0712 00:36:47.135428 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.135521 kubelet[2103]: E0712 00:36:47.135510 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:47.135749 kubelet[2103]: E0712 00:36:47.135738 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.135835 kubelet[2103]: W0712 00:36:47.135823 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.135896 kubelet[2103]: E0712 00:36:47.135884 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:47.136141 kubelet[2103]: E0712 00:36:47.136129 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.136213 kubelet[2103]: W0712 00:36:47.136201 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.136267 kubelet[2103]: E0712 00:36:47.136256 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:47.136508 kubelet[2103]: E0712 00:36:47.136496 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.136614 kubelet[2103]: W0712 00:36:47.136601 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.136696 kubelet[2103]: E0712 00:36:47.136684 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:47.136944 kubelet[2103]: E0712 00:36:47.136932 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.137019 kubelet[2103]: W0712 00:36:47.137006 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.137074 kubelet[2103]: E0712 00:36:47.137063 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:47.137286 kubelet[2103]: E0712 00:36:47.137274 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.137373 kubelet[2103]: W0712 00:36:47.137353 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.137480 kubelet[2103]: E0712 00:36:47.137467 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:47.137691 kubelet[2103]: E0712 00:36:47.137680 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.137765 kubelet[2103]: W0712 00:36:47.137753 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.137823 kubelet[2103]: E0712 00:36:47.137812 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:47.138019 kubelet[2103]: E0712 00:36:47.138007 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.138093 kubelet[2103]: W0712 00:36:47.138081 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.138151 kubelet[2103]: E0712 00:36:47.138140 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:47.138407 kubelet[2103]: E0712 00:36:47.138373 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.138490 kubelet[2103]: W0712 00:36:47.138465 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.138548 kubelet[2103]: E0712 00:36:47.138536 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:47.138799 kubelet[2103]: E0712 00:36:47.138775 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.138895 kubelet[2103]: W0712 00:36:47.138882 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.138965 kubelet[2103]: E0712 00:36:47.138941 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:47.139212 kubelet[2103]: E0712 00:36:47.139200 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.139289 kubelet[2103]: W0712 00:36:47.139276 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.139357 kubelet[2103]: E0712 00:36:47.139344 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:47.139589 kubelet[2103]: E0712 00:36:47.139578 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.139662 kubelet[2103]: W0712 00:36:47.139650 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.139721 kubelet[2103]: E0712 00:36:47.139709 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:47.139908 kubelet[2103]: E0712 00:36:47.139897 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.139978 kubelet[2103]: W0712 00:36:47.139966 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.140035 kubelet[2103]: E0712 00:36:47.140025 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:47.140233 kubelet[2103]: E0712 00:36:47.140222 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.140305 kubelet[2103]: W0712 00:36:47.140293 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.140373 kubelet[2103]: E0712 00:36:47.140362 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:47.143587 kubelet[2103]: E0712 00:36:47.143571 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.143686 kubelet[2103]: W0712 00:36:47.143673 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.143761 kubelet[2103]: E0712 00:36:47.143748 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:47.143885 kubelet[2103]: I0712 00:36:47.143856 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6f21bec0-521a-455c-b964-ef73ea0151cf-registration-dir\") pod \"csi-node-driver-79v58\" (UID: \"6f21bec0-521a-455c-b964-ef73ea0151cf\") " pod="calico-system/csi-node-driver-79v58" Jul 12 00:36:47.144197 kubelet[2103]: E0712 00:36:47.144179 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.144247 kubelet[2103]: W0712 00:36:47.144198 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.144247 kubelet[2103]: E0712 00:36:47.144216 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:47.144432 kubelet[2103]: E0712 00:36:47.144421 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.144432 kubelet[2103]: W0712 00:36:47.144432 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.144508 kubelet[2103]: E0712 00:36:47.144446 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:47.144731 kubelet[2103]: E0712 00:36:47.144716 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.144811 kubelet[2103]: W0712 00:36:47.144798 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.144870 kubelet[2103]: E0712 00:36:47.144859 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:47.144948 env[1324]: time="2025-07-12T00:36:47.144908931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kxbkw,Uid:78777352-9861-450c-adfe-86e885a34db3,Namespace:calico-system,Attempt:0,}" Jul 12 00:36:47.144993 kubelet[2103]: I0712 00:36:47.144924 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f21bec0-521a-455c-b964-ef73ea0151cf-kubelet-dir\") pod \"csi-node-driver-79v58\" (UID: \"6f21bec0-521a-455c-b964-ef73ea0151cf\") " pod="calico-system/csi-node-driver-79v58" Jul 12 00:36:47.145218 kubelet[2103]: E0712 00:36:47.145205 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.145312 kubelet[2103]: W0712 00:36:47.145300 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.145605 kubelet[2103]: E0712 00:36:47.145549 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:47.145741 kubelet[2103]: I0712 00:36:47.145719 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6f21bec0-521a-455c-b964-ef73ea0151cf-varrun\") pod \"csi-node-driver-79v58\" (UID: \"6f21bec0-521a-455c-b964-ef73ea0151cf\") " pod="calico-system/csi-node-driver-79v58" Jul 12 00:36:47.145991 kubelet[2103]: E0712 00:36:47.145908 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.145991 kubelet[2103]: W0712 00:36:47.145923 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.145991 kubelet[2103]: E0712 00:36:47.145940 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:47.146517 kubelet[2103]: E0712 00:36:47.146088 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.147190 kubelet[2103]: W0712 00:36:47.146095 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.147190 kubelet[2103]: E0712 00:36:47.146600 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:47.147190 kubelet[2103]: E0712 00:36:47.146853 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.147190 kubelet[2103]: W0712 00:36:47.146864 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.147190 kubelet[2103]: E0712 00:36:47.146875 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:47.147190 kubelet[2103]: E0712 00:36:47.147000 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.147190 kubelet[2103]: W0712 00:36:47.147010 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.147190 kubelet[2103]: E0712 00:36:47.147017 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:47.147190 kubelet[2103]: E0712 00:36:47.147137 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.147190 kubelet[2103]: W0712 00:36:47.147144 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.149190 kubelet[2103]: E0712 00:36:47.147151 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:47.149190 kubelet[2103]: I0712 00:36:47.147170 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65nfd\" (UniqueName: \"kubernetes.io/projected/6f21bec0-521a-455c-b964-ef73ea0151cf-kube-api-access-65nfd\") pod \"csi-node-driver-79v58\" (UID: \"6f21bec0-521a-455c-b964-ef73ea0151cf\") " pod="calico-system/csi-node-driver-79v58" Jul 12 00:36:47.149190 kubelet[2103]: E0712 00:36:47.147305 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.149190 kubelet[2103]: W0712 00:36:47.147313 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.149190 kubelet[2103]: E0712 00:36:47.147321 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:47.149190 kubelet[2103]: I0712 00:36:47.147343 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6f21bec0-521a-455c-b964-ef73ea0151cf-socket-dir\") pod \"csi-node-driver-79v58\" (UID: \"6f21bec0-521a-455c-b964-ef73ea0151cf\") " pod="calico-system/csi-node-driver-79v58" Jul 12 00:36:47.149190 kubelet[2103]: E0712 00:36:47.147522 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.149190 kubelet[2103]: W0712 00:36:47.147532 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.149489 kubelet[2103]: E0712 00:36:47.147540 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:47.149489 kubelet[2103]: E0712 00:36:47.147659 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.149489 kubelet[2103]: W0712 00:36:47.147667 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.149489 kubelet[2103]: E0712 00:36:47.147674 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:47.149489 kubelet[2103]: E0712 00:36:47.147822 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.149489 kubelet[2103]: W0712 00:36:47.147829 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.149489 kubelet[2103]: E0712 00:36:47.147836 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:47.149489 kubelet[2103]: E0712 00:36:47.147982 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.149489 kubelet[2103]: W0712 00:36:47.147990 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.149489 kubelet[2103]: E0712 00:36:47.147997 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:47.163140 env[1324]: time="2025-07-12T00:36:47.163049456Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:36:47.163275 env[1324]: time="2025-07-12T00:36:47.163156796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:36:47.163275 env[1324]: time="2025-07-12T00:36:47.163183720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:36:47.164497 env[1324]: time="2025-07-12T00:36:47.163432325Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e695b5190d8a1747594a63426732c05c58a739dda77771b177d52ce5cde4f82 pid=2621 runtime=io.containerd.runc.v2 Jul 12 00:36:47.239465 env[1324]: time="2025-07-12T00:36:47.239134595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kxbkw,Uid:78777352-9861-450c-adfe-86e885a34db3,Namespace:calico-system,Attempt:0,} returns sandbox id \"9e695b5190d8a1747594a63426732c05c58a739dda77771b177d52ce5cde4f82\"" Jul 12 00:36:47.248572 kubelet[2103]: E0712 00:36:47.247818 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.248572 kubelet[2103]: W0712 00:36:47.247837 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.248572 kubelet[2103]: E0712 00:36:47.247854 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:47.248572 kubelet[2103]: E0712 00:36:47.248028 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.248572 kubelet[2103]: W0712 00:36:47.248034 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.248572 kubelet[2103]: E0712 00:36:47.248046 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:47.248572 kubelet[2103]: E0712 00:36:47.248263 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.248572 kubelet[2103]: W0712 00:36:47.248271 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.248572 kubelet[2103]: E0712 00:36:47.248285 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:47.248572 kubelet[2103]: E0712 00:36:47.248459 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.248896 kubelet[2103]: W0712 00:36:47.248469 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.248896 kubelet[2103]: E0712 00:36:47.248483 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:47.248896 kubelet[2103]: E0712 00:36:47.248632 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.248896 kubelet[2103]: W0712 00:36:47.248639 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.248896 kubelet[2103]: E0712 00:36:47.248646 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:47.248896 kubelet[2103]: E0712 00:36:47.248799 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.248896 kubelet[2103]: W0712 00:36:47.248806 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.248896 kubelet[2103]: E0712 00:36:47.248814 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:47.249065 kubelet[2103]: E0712 00:36:47.248939 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.249065 kubelet[2103]: W0712 00:36:47.248946 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.249065 kubelet[2103]: E0712 00:36:47.248953 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:47.249065 kubelet[2103]: E0712 00:36:47.249061 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.249065 kubelet[2103]: W0712 00:36:47.249067 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.249171 kubelet[2103]: E0712 00:36:47.249074 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:47.249378 kubelet[2103]: E0712 00:36:47.249365 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.249447 kubelet[2103]: W0712 00:36:47.249379 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.249534 kubelet[2103]: E0712 00:36:47.249497 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:47.254201 kubelet[2103]: E0712 00:36:47.249587 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.254201 kubelet[2103]: W0712 00:36:47.249597 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.254201 kubelet[2103]: E0712 00:36:47.249740 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:47.254201 kubelet[2103]: E0712 00:36:47.249875 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.254201 kubelet[2103]: W0712 00:36:47.249884 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.254201 kubelet[2103]: E0712 00:36:47.249961 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:47.254201 kubelet[2103]: E0712 00:36:47.250025 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.254201 kubelet[2103]: W0712 00:36:47.250031 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.254201 kubelet[2103]: E0712 00:36:47.250109 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:47.254201 kubelet[2103]: E0712 00:36:47.250168 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.254545 kubelet[2103]: W0712 00:36:47.250174 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.254545 kubelet[2103]: E0712 00:36:47.250184 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:47.254545 kubelet[2103]: E0712 00:36:47.250486 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.254545 kubelet[2103]: W0712 00:36:47.250497 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.254545 kubelet[2103]: E0712 00:36:47.250507 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:47.254545 kubelet[2103]: E0712 00:36:47.250672 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.254545 kubelet[2103]: W0712 00:36:47.250680 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.254545 kubelet[2103]: E0712 00:36:47.250689 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:47.254545 kubelet[2103]: E0712 00:36:47.250919 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.254545 kubelet[2103]: W0712 00:36:47.250928 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.254747 kubelet[2103]: E0712 00:36:47.251005 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:47.254747 kubelet[2103]: E0712 00:36:47.251080 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.254747 kubelet[2103]: W0712 00:36:47.251088 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.254747 kubelet[2103]: E0712 00:36:47.251176 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:47.254747 kubelet[2103]: E0712 00:36:47.251226 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.254747 kubelet[2103]: W0712 00:36:47.251280 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.254747 kubelet[2103]: E0712 00:36:47.251360 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:47.254747 kubelet[2103]: E0712 00:36:47.251533 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.254747 kubelet[2103]: W0712 00:36:47.251543 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.254747 kubelet[2103]: E0712 00:36:47.251656 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:47.254942 kubelet[2103]: E0712 00:36:47.251774 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.254942 kubelet[2103]: W0712 00:36:47.251783 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.254942 kubelet[2103]: E0712 00:36:47.251805 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:47.254942 kubelet[2103]: E0712 00:36:47.252003 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.254942 kubelet[2103]: W0712 00:36:47.252012 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.254942 kubelet[2103]: E0712 00:36:47.252023 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:47.254942 kubelet[2103]: E0712 00:36:47.252399 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.254942 kubelet[2103]: W0712 00:36:47.252411 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.254942 kubelet[2103]: E0712 00:36:47.252423 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:47.254942 kubelet[2103]: E0712 00:36:47.252601 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.255158 kubelet[2103]: W0712 00:36:47.252611 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.255158 kubelet[2103]: E0712 00:36:47.252667 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:47.256691 kubelet[2103]: E0712 00:36:47.256454 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.256691 kubelet[2103]: W0712 00:36:47.256469 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.256691 kubelet[2103]: E0712 00:36:47.256482 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:47.256916 kubelet[2103]: E0712 00:36:47.256868 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.256916 kubelet[2103]: W0712 00:36:47.256880 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.256916 kubelet[2103]: E0712 00:36:47.256891 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:47.262767 kubelet[2103]: E0712 00:36:47.262751 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:47.262767 kubelet[2103]: W0712 00:36:47.262765 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:47.262865 kubelet[2103]: E0712 00:36:47.262777 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:47.638000 audit[2681]: NETFILTER_CFG table=filter:97 family=2 entries=20 op=nft_register_rule pid=2681 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:36:47.641913 kernel: kauditd_printk_skb: 19 callbacks suppressed Jul 12 00:36:47.641995 kernel: audit: type=1325 audit(1752280607.638:283): table=filter:97 family=2 entries=20 op=nft_register_rule pid=2681 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:36:47.642018 kernel: audit: type=1300 audit(1752280607.638:283): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd06bf6d0 a2=0 a3=1 items=0 ppid=2213 pid=2681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:47.638000 audit[2681]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd06bf6d0 a2=0 a3=1 items=0 ppid=2213 pid=2681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:47.638000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:36:47.647566 kernel: audit: type=1327 audit(1752280607.638:283): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:36:47.653000 audit[2681]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2681 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:36:47.653000 audit[2681]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd06bf6d0 a2=0 a3=1 items=0 ppid=2213 pid=2681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:47.659831 kernel: audit: type=1325 audit(1752280607.653:284): table=nat:98 family=2 entries=12 op=nft_register_rule pid=2681 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:36:47.659905 kernel: audit: type=1300 audit(1752280607.653:284): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd06bf6d0 a2=0 a3=1 items=0 ppid=2213 pid=2681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:36:47.659925 kernel: audit: type=1327 audit(1752280607.653:284): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:36:47.653000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:36:47.983325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2648460411.mount: Deactivated successfully. 
Jul 12 00:36:48.405589 kubelet[2103]: E0712 00:36:48.405461 2103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79v58" podUID="6f21bec0-521a-455c-b964-ef73ea0151cf" Jul 12 00:36:48.728562 env[1324]: time="2025-07-12T00:36:48.728430506Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:48.730678 env[1324]: time="2025-07-12T00:36:48.730637088Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:48.732066 env[1324]: time="2025-07-12T00:36:48.732021568Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:48.733305 env[1324]: time="2025-07-12T00:36:48.733266063Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:48.733710 env[1324]: time="2025-07-12T00:36:48.733673134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 12 00:36:48.734930 env[1324]: time="2025-07-12T00:36:48.734780125Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 12 00:36:48.747994 env[1324]: time="2025-07-12T00:36:48.747934721Z" level=info msg="CreateContainer within sandbox 
\"0ca02565f54980351f4a6d93325db3e96b345d64ffec20f799c18b6a7b513a80\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 12 00:36:48.758254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2059117727.mount: Deactivated successfully. Jul 12 00:36:48.759147 env[1324]: time="2025-07-12T00:36:48.759104174Z" level=info msg="CreateContainer within sandbox \"0ca02565f54980351f4a6d93325db3e96b345d64ffec20f799c18b6a7b513a80\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"03996e08a90e2461d999b3090161ba64cf13a298fba1b509a83ad0a9f7c364f7\"" Jul 12 00:36:48.760686 env[1324]: time="2025-07-12T00:36:48.760620836Z" level=info msg="StartContainer for \"03996e08a90e2461d999b3090161ba64cf13a298fba1b509a83ad0a9f7c364f7\"" Jul 12 00:36:48.832053 env[1324]: time="2025-07-12T00:36:48.832012508Z" level=info msg="StartContainer for \"03996e08a90e2461d999b3090161ba64cf13a298fba1b509a83ad0a9f7c364f7\" returns successfully" Jul 12 00:36:49.477656 kubelet[2103]: E0712 00:36:49.477622 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:36:49.499291 kubelet[2103]: I0712 00:36:49.499226 2103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5f69b79bf8-h5ck9" podStartSLOduration=1.772222062 podStartE2EDuration="3.49921005s" podCreationTimestamp="2025-07-12 00:36:46 +0000 UTC" firstStartedPulling="2025-07-12 00:36:47.007631989 +0000 UTC m=+21.682838887" lastFinishedPulling="2025-07-12 00:36:48.734619977 +0000 UTC m=+23.409826875" observedRunningTime="2025-07-12 00:36:49.497795256 +0000 UTC m=+24.173002154" watchObservedRunningTime="2025-07-12 00:36:49.49921005 +0000 UTC m=+24.174416948" Jul 12 00:36:49.558652 kubelet[2103]: E0712 00:36:49.558621 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON 
input Jul 12 00:36:49.558652 kubelet[2103]: W0712 00:36:49.558645 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:49.558835 kubelet[2103]: E0712 00:36:49.558666 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:49.558881 kubelet[2103]: E0712 00:36:49.558865 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:49.558881 kubelet[2103]: W0712 00:36:49.558877 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:49.558952 kubelet[2103]: E0712 00:36:49.558885 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:49.559058 kubelet[2103]: E0712 00:36:49.559047 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:49.559058 kubelet[2103]: W0712 00:36:49.559057 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:49.559117 kubelet[2103]: E0712 00:36:49.559073 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:49.559215 kubelet[2103]: E0712 00:36:49.559205 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:49.559249 kubelet[2103]: W0712 00:36:49.559215 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:49.559249 kubelet[2103]: E0712 00:36:49.559231 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:49.559402 kubelet[2103]: E0712 00:36:49.559391 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:49.559402 kubelet[2103]: W0712 00:36:49.559402 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:49.559468 kubelet[2103]: E0712 00:36:49.559411 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:49.559565 kubelet[2103]: E0712 00:36:49.559556 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:49.559565 kubelet[2103]: W0712 00:36:49.559565 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:49.559635 kubelet[2103]: E0712 00:36:49.559572 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:49.559721 kubelet[2103]: E0712 00:36:49.559711 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:49.559721 kubelet[2103]: W0712 00:36:49.559721 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:49.559800 kubelet[2103]: E0712 00:36:49.559728 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:49.559896 kubelet[2103]: E0712 00:36:49.559885 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:49.559896 kubelet[2103]: W0712 00:36:49.559896 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:49.559964 kubelet[2103]: E0712 00:36:49.559904 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:49.560083 kubelet[2103]: E0712 00:36:49.560072 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:49.560118 kubelet[2103]: W0712 00:36:49.560083 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:49.560118 kubelet[2103]: E0712 00:36:49.560092 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:49.560237 kubelet[2103]: E0712 00:36:49.560227 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:49.560237 kubelet[2103]: W0712 00:36:49.560237 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:49.560308 kubelet[2103]: E0712 00:36:49.560244 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:49.560401 kubelet[2103]: E0712 00:36:49.560390 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:49.560401 kubelet[2103]: W0712 00:36:49.560400 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:49.560466 kubelet[2103]: E0712 00:36:49.560408 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:49.560555 kubelet[2103]: E0712 00:36:49.560545 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:49.560555 kubelet[2103]: W0712 00:36:49.560555 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:49.560619 kubelet[2103]: E0712 00:36:49.560563 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:49.560794 kubelet[2103]: E0712 00:36:49.560742 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:49.560794 kubelet[2103]: W0712 00:36:49.560765 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:49.560794 kubelet[2103]: E0712 00:36:49.560773 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:49.560938 kubelet[2103]: E0712 00:36:49.560927 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:49.560938 kubelet[2103]: W0712 00:36:49.560936 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:49.561008 kubelet[2103]: E0712 00:36:49.560944 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:49.561094 kubelet[2103]: E0712 00:36:49.561084 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:49.561094 kubelet[2103]: W0712 00:36:49.561094 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:49.561151 kubelet[2103]: E0712 00:36:49.561101 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:49.563547 kubelet[2103]: E0712 00:36:49.563528 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:49.563652 kubelet[2103]: W0712 00:36:49.563636 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:49.563718 kubelet[2103]: E0712 00:36:49.563706 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:49.564039 kubelet[2103]: E0712 00:36:49.564025 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:49.564116 kubelet[2103]: W0712 00:36:49.564104 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:49.564192 kubelet[2103]: E0712 00:36:49.564179 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:49.564483 kubelet[2103]: E0712 00:36:49.564458 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:49.564483 kubelet[2103]: W0712 00:36:49.564476 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:49.564583 kubelet[2103]: E0712 00:36:49.564493 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:49.564754 kubelet[2103]: E0712 00:36:49.564727 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:49.564754 kubelet[2103]: W0712 00:36:49.564740 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:49.564807 kubelet[2103]: E0712 00:36:49.564760 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:49.564915 kubelet[2103]: E0712 00:36:49.564904 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:49.564948 kubelet[2103]: W0712 00:36:49.564915 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:49.564948 kubelet[2103]: E0712 00:36:49.564928 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:49.565103 kubelet[2103]: E0712 00:36:49.565091 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:49.565103 kubelet[2103]: W0712 00:36:49.565102 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:49.565167 kubelet[2103]: E0712 00:36:49.565115 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:49.565514 kubelet[2103]: E0712 00:36:49.565490 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:49.565637 kubelet[2103]: W0712 00:36:49.565622 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:49.565713 kubelet[2103]: E0712 00:36:49.565698 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:49.565934 kubelet[2103]: E0712 00:36:49.565915 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:49.565934 kubelet[2103]: W0712 00:36:49.565930 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:49.566004 kubelet[2103]: E0712 00:36:49.565945 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:49.566094 kubelet[2103]: E0712 00:36:49.566082 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:49.566094 kubelet[2103]: W0712 00:36:49.566093 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:49.566147 kubelet[2103]: E0712 00:36:49.566105 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:49.566248 kubelet[2103]: E0712 00:36:49.566238 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:49.566274 kubelet[2103]: W0712 00:36:49.566248 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:49.566274 kubelet[2103]: E0712 00:36:49.566261 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:49.566433 kubelet[2103]: E0712 00:36:49.566423 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:49.566470 kubelet[2103]: W0712 00:36:49.566433 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:49.566470 kubelet[2103]: E0712 00:36:49.566445 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:49.566718 kubelet[2103]: E0712 00:36:49.566699 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:49.566769 kubelet[2103]: W0712 00:36:49.566717 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:49.566769 kubelet[2103]: E0712 00:36:49.566735 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:49.566930 kubelet[2103]: E0712 00:36:49.566914 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:49.566930 kubelet[2103]: W0712 00:36:49.566928 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:49.566995 kubelet[2103]: E0712 00:36:49.566944 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:49.567096 kubelet[2103]: E0712 00:36:49.567085 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:49.567132 kubelet[2103]: W0712 00:36:49.567098 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:49.567132 kubelet[2103]: E0712 00:36:49.567111 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:49.567278 kubelet[2103]: E0712 00:36:49.567268 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:49.567278 kubelet[2103]: W0712 00:36:49.567277 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:49.567344 kubelet[2103]: E0712 00:36:49.567289 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:49.567467 kubelet[2103]: E0712 00:36:49.567456 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:49.567507 kubelet[2103]: W0712 00:36:49.567467 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:49.567507 kubelet[2103]: E0712 00:36:49.567480 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:49.567724 kubelet[2103]: E0712 00:36:49.567711 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:49.567824 kubelet[2103]: W0712 00:36:49.567725 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:49.567824 kubelet[2103]: E0712 00:36:49.567741 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:36:49.567933 kubelet[2103]: E0712 00:36:49.567922 2103 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:36:49.567933 kubelet[2103]: W0712 00:36:49.567934 2103 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:36:49.567989 kubelet[2103]: E0712 00:36:49.567943 2103 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:36:49.847163 env[1324]: time="2025-07-12T00:36:49.847021991Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:49.849907 env[1324]: time="2025-07-12T00:36:49.849872582Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:49.851615 env[1324]: time="2025-07-12T00:36:49.851577264Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:49.853450 env[1324]: time="2025-07-12T00:36:49.853423290Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:49.854418 env[1324]: time="2025-07-12T00:36:49.854204019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 12 00:36:49.857318 env[1324]: time="2025-07-12T00:36:49.857238921Z" level=info msg="CreateContainer within sandbox \"9e695b5190d8a1747594a63426732c05c58a739dda77771b177d52ce5cde4f82\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 12 00:36:49.870816 env[1324]: time="2025-07-12T00:36:49.870764039Z" level=info msg="CreateContainer within sandbox \"9e695b5190d8a1747594a63426732c05c58a739dda77771b177d52ce5cde4f82\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"96240b6164b920404028f28a6012cf56e604986020d3bf0f8e50d3456309039b\"" Jul 12 
00:36:49.871421 env[1324]: time="2025-07-12T00:36:49.871393983Z" level=info msg="StartContainer for \"96240b6164b920404028f28a6012cf56e604986020d3bf0f8e50d3456309039b\"" Jul 12 00:36:49.934172 env[1324]: time="2025-07-12T00:36:49.934127921Z" level=info msg="StartContainer for \"96240b6164b920404028f28a6012cf56e604986020d3bf0f8e50d3456309039b\" returns successfully" Jul 12 00:36:49.971454 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96240b6164b920404028f28a6012cf56e604986020d3bf0f8e50d3456309039b-rootfs.mount: Deactivated successfully. Jul 12 00:36:49.988325 env[1324]: time="2025-07-12T00:36:49.988279840Z" level=info msg="shim disconnected" id=96240b6164b920404028f28a6012cf56e604986020d3bf0f8e50d3456309039b Jul 12 00:36:49.988325 env[1324]: time="2025-07-12T00:36:49.988327968Z" level=warning msg="cleaning up after shim disconnected" id=96240b6164b920404028f28a6012cf56e604986020d3bf0f8e50d3456309039b namespace=k8s.io Jul 12 00:36:49.988560 env[1324]: time="2025-07-12T00:36:49.988338249Z" level=info msg="cleaning up dead shim" Jul 12 00:36:49.995262 env[1324]: time="2025-07-12T00:36:49.995227909Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:36:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2805 runtime=io.containerd.runc.v2\n" Jul 12 00:36:50.405336 kubelet[2103]: E0712 00:36:50.405289 2103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79v58" podUID="6f21bec0-521a-455c-b964-ef73ea0151cf" Jul 12 00:36:50.480525 kubelet[2103]: I0712 00:36:50.480498 2103 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:36:50.480848 kubelet[2103]: E0712 00:36:50.480781 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:36:50.481667 env[1324]: time="2025-07-12T00:36:50.481598439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 12 00:36:52.406200 kubelet[2103]: E0712 00:36:52.406138 2103 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79v58" podUID="6f21bec0-521a-455c-b964-ef73ea0151cf" Jul 12 00:36:53.519521 env[1324]: time="2025-07-12T00:36:53.519469961Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:53.523619 env[1324]: time="2025-07-12T00:36:53.521446156Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:53.524142 env[1324]: time="2025-07-12T00:36:53.524113009Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:53.525422 env[1324]: time="2025-07-12T00:36:53.525388186Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:36:53.526005 env[1324]: time="2025-07-12T00:36:53.525972868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 12 00:36:53.534423 env[1324]: time="2025-07-12T00:36:53.534287388Z" level=info msg="CreateContainer within 
sandbox \"9e695b5190d8a1747594a63426732c05c58a739dda77771b177d52ce5cde4f82\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 12 00:36:53.557270 env[1324]: time="2025-07-12T00:36:53.556810492Z" level=info msg="CreateContainer within sandbox \"9e695b5190d8a1747594a63426732c05c58a739dda77771b177d52ce5cde4f82\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9bc160904a04b47877d5563bb9e512e6f6693cfaaa7d60b9f4464c499937d31b\"" Jul 12 00:36:53.557617 env[1324]: time="2025-07-12T00:36:53.557547635Z" level=info msg="StartContainer for \"9bc160904a04b47877d5563bb9e512e6f6693cfaaa7d60b9f4464c499937d31b\"" Jul 12 00:36:53.722374 env[1324]: time="2025-07-12T00:36:53.722314589Z" level=info msg="StartContainer for \"9bc160904a04b47877d5563bb9e512e6f6693cfaaa7d60b9f4464c499937d31b\" returns successfully" Jul 12 00:36:54.190267 env[1324]: time="2025-07-12T00:36:54.190210578Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:36:54.194823 kubelet[2103]: I0712 00:36:54.194094 2103 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 12 00:36:54.217273 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9bc160904a04b47877d5563bb9e512e6f6693cfaaa7d60b9f4464c499937d31b-rootfs.mount: Deactivated successfully. 
Jul 12 00:36:54.222640 env[1324]: time="2025-07-12T00:36:54.222596079Z" level=info msg="shim disconnected" id=9bc160904a04b47877d5563bb9e512e6f6693cfaaa7d60b9f4464c499937d31b Jul 12 00:36:54.222808 env[1324]: time="2025-07-12T00:36:54.222790145Z" level=warning msg="cleaning up after shim disconnected" id=9bc160904a04b47877d5563bb9e512e6f6693cfaaa7d60b9f4464c499937d31b namespace=k8s.io Jul 12 00:36:54.222885 env[1324]: time="2025-07-12T00:36:54.222870396Z" level=info msg="cleaning up dead shim" Jul 12 00:36:54.239039 env[1324]: time="2025-07-12T00:36:54.238996398Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:36:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2877 runtime=io.containerd.runc.v2\n" Jul 12 00:36:54.399113 kubelet[2103]: I0712 00:36:54.399058 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lln2p\" (UniqueName: \"kubernetes.io/projected/9c149c99-30db-4ff2-86be-58fb6e2d813a-kube-api-access-lln2p\") pod \"whisker-6d9765465-897pg\" (UID: \"9c149c99-30db-4ff2-86be-58fb6e2d813a\") " pod="calico-system/whisker-6d9765465-897pg" Jul 12 00:36:54.399296 kubelet[2103]: I0712 00:36:54.399123 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk9bq\" (UniqueName: \"kubernetes.io/projected/8eb4ca6c-aaff-4fe8-9f0e-771c33b4200d-kube-api-access-wk9bq\") pod \"calico-kube-controllers-7f46b5b9d6-92dnp\" (UID: \"8eb4ca6c-aaff-4fe8-9f0e-771c33b4200d\") " pod="calico-system/calico-kube-controllers-7f46b5b9d6-92dnp" Jul 12 00:36:54.399296 kubelet[2103]: I0712 00:36:54.399145 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sddp\" (UniqueName: \"kubernetes.io/projected/3888f770-0a64-4382-86fc-ba4105786dc9-kube-api-access-6sddp\") pod \"coredns-7c65d6cfc9-nkzk8\" (UID: \"3888f770-0a64-4382-86fc-ba4105786dc9\") " 
pod="kube-system/coredns-7c65d6cfc9-nkzk8" Jul 12 00:36:54.399296 kubelet[2103]: I0712 00:36:54.399219 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3fa0a624-ecf9-48dd-83d5-27860d361813-config-volume\") pod \"coredns-7c65d6cfc9-dfvjl\" (UID: \"3fa0a624-ecf9-48dd-83d5-27860d361813\") " pod="kube-system/coredns-7c65d6cfc9-dfvjl" Jul 12 00:36:54.399296 kubelet[2103]: I0712 00:36:54.399267 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc9cn\" (UniqueName: \"kubernetes.io/projected/fecfff6f-79d3-4090-96c1-83913da0527a-kube-api-access-zc9cn\") pod \"calico-apiserver-7d9dcbc845-6nfvq\" (UID: \"fecfff6f-79d3-4090-96c1-83913da0527a\") " pod="calico-apiserver/calico-apiserver-7d9dcbc845-6nfvq" Jul 12 00:36:54.399296 kubelet[2103]: I0712 00:36:54.399286 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/faaca0cb-1548-4256-82f4-00433e531079-calico-apiserver-certs\") pod \"calico-apiserver-7d9dcbc845-hgq6q\" (UID: \"faaca0cb-1548-4256-82f4-00433e531079\") " pod="calico-apiserver/calico-apiserver-7d9dcbc845-hgq6q" Jul 12 00:36:54.399471 kubelet[2103]: I0712 00:36:54.399303 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/118681c8-63c7-4aed-ae42-07c9da34ea65-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-6mdg7\" (UID: \"118681c8-63c7-4aed-ae42-07c9da34ea65\") " pod="calico-system/goldmane-58fd7646b9-6mdg7" Jul 12 00:36:54.399471 kubelet[2103]: I0712 00:36:54.399357 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/118681c8-63c7-4aed-ae42-07c9da34ea65-goldmane-key-pair\") 
pod \"goldmane-58fd7646b9-6mdg7\" (UID: \"118681c8-63c7-4aed-ae42-07c9da34ea65\") " pod="calico-system/goldmane-58fd7646b9-6mdg7" Jul 12 00:36:54.399471 kubelet[2103]: I0712 00:36:54.399376 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/118681c8-63c7-4aed-ae42-07c9da34ea65-config\") pod \"goldmane-58fd7646b9-6mdg7\" (UID: \"118681c8-63c7-4aed-ae42-07c9da34ea65\") " pod="calico-system/goldmane-58fd7646b9-6mdg7" Jul 12 00:36:54.399471 kubelet[2103]: I0712 00:36:54.399453 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c149c99-30db-4ff2-86be-58fb6e2d813a-whisker-ca-bundle\") pod \"whisker-6d9765465-897pg\" (UID: \"9c149c99-30db-4ff2-86be-58fb6e2d813a\") " pod="calico-system/whisker-6d9765465-897pg" Jul 12 00:36:54.399568 kubelet[2103]: I0712 00:36:54.399475 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkdx7\" (UniqueName: \"kubernetes.io/projected/118681c8-63c7-4aed-ae42-07c9da34ea65-kube-api-access-mkdx7\") pod \"goldmane-58fd7646b9-6mdg7\" (UID: \"118681c8-63c7-4aed-ae42-07c9da34ea65\") " pod="calico-system/goldmane-58fd7646b9-6mdg7" Jul 12 00:36:54.399568 kubelet[2103]: I0712 00:36:54.399537 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9c149c99-30db-4ff2-86be-58fb6e2d813a-whisker-backend-key-pair\") pod \"whisker-6d9765465-897pg\" (UID: \"9c149c99-30db-4ff2-86be-58fb6e2d813a\") " pod="calico-system/whisker-6d9765465-897pg" Jul 12 00:36:54.399568 kubelet[2103]: I0712 00:36:54.399554 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/fecfff6f-79d3-4090-96c1-83913da0527a-calico-apiserver-certs\") pod \"calico-apiserver-7d9dcbc845-6nfvq\" (UID: \"fecfff6f-79d3-4090-96c1-83913da0527a\") " pod="calico-apiserver/calico-apiserver-7d9dcbc845-6nfvq" Jul 12 00:36:54.399636 kubelet[2103]: I0712 00:36:54.399602 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n655c\" (UniqueName: \"kubernetes.io/projected/3fa0a624-ecf9-48dd-83d5-27860d361813-kube-api-access-n655c\") pod \"coredns-7c65d6cfc9-dfvjl\" (UID: \"3fa0a624-ecf9-48dd-83d5-27860d361813\") " pod="kube-system/coredns-7c65d6cfc9-dfvjl" Jul 12 00:36:54.399636 kubelet[2103]: I0712 00:36:54.399620 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8eb4ca6c-aaff-4fe8-9f0e-771c33b4200d-tigera-ca-bundle\") pod \"calico-kube-controllers-7f46b5b9d6-92dnp\" (UID: \"8eb4ca6c-aaff-4fe8-9f0e-771c33b4200d\") " pod="calico-system/calico-kube-controllers-7f46b5b9d6-92dnp" Jul 12 00:36:54.399636 kubelet[2103]: I0712 00:36:54.399634 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3888f770-0a64-4382-86fc-ba4105786dc9-config-volume\") pod \"coredns-7c65d6cfc9-nkzk8\" (UID: \"3888f770-0a64-4382-86fc-ba4105786dc9\") " pod="kube-system/coredns-7c65d6cfc9-nkzk8" Jul 12 00:36:54.399708 kubelet[2103]: I0712 00:36:54.399678 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw6jl\" (UniqueName: \"kubernetes.io/projected/faaca0cb-1548-4256-82f4-00433e531079-kube-api-access-xw6jl\") pod \"calico-apiserver-7d9dcbc845-hgq6q\" (UID: \"faaca0cb-1548-4256-82f4-00433e531079\") " pod="calico-apiserver/calico-apiserver-7d9dcbc845-hgq6q" Jul 12 00:36:54.408697 env[1324]: time="2025-07-12T00:36:54.408654464Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-79v58,Uid:6f21bec0-521a-455c-b964-ef73ea0151cf,Namespace:calico-system,Attempt:0,}" Jul 12 00:36:54.495110 env[1324]: time="2025-07-12T00:36:54.494987759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 12 00:36:54.552422 env[1324]: time="2025-07-12T00:36:54.549995653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d9dcbc845-6nfvq,Uid:fecfff6f-79d3-4090-96c1-83913da0527a,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:36:54.556669 kubelet[2103]: E0712 00:36:54.556086 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:36:54.556796 env[1324]: time="2025-07-12T00:36:54.556720955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dfvjl,Uid:3fa0a624-ecf9-48dd-83d5-27860d361813,Namespace:kube-system,Attempt:0,}" Jul 12 00:36:54.556989 kubelet[2103]: E0712 00:36:54.556890 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:36:54.557641 env[1324]: time="2025-07-12T00:36:54.557551146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nkzk8,Uid:3888f770-0a64-4382-86fc-ba4105786dc9,Namespace:kube-system,Attempt:0,}" Jul 12 00:36:54.568128 env[1324]: time="2025-07-12T00:36:54.568088599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d9765465-897pg,Uid:9c149c99-30db-4ff2-86be-58fb6e2d813a,Namespace:calico-system,Attempt:0,}" Jul 12 00:36:54.568276 env[1324]: time="2025-07-12T00:36:54.568088759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-6mdg7,Uid:118681c8-63c7-4aed-ae42-07c9da34ea65,Namespace:calico-system,Attempt:0,}" Jul 12 00:36:54.570642 env[1324]: 
time="2025-07-12T00:36:54.570610937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d9dcbc845-hgq6q,Uid:faaca0cb-1548-4256-82f4-00433e531079,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:36:54.572921 env[1324]: time="2025-07-12T00:36:54.572884802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f46b5b9d6-92dnp,Uid:8eb4ca6c-aaff-4fe8-9f0e-771c33b4200d,Namespace:calico-system,Attempt:0,}" Jul 12 00:36:54.595137 env[1324]: time="2025-07-12T00:36:54.595068136Z" level=error msg="Failed to destroy network for sandbox \"7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:36:54.597034 env[1324]: time="2025-07-12T00:36:54.596991074Z" level=error msg="encountered an error cleaning up failed sandbox \"7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:36:54.597090 env[1324]: time="2025-07-12T00:36:54.597057763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-79v58,Uid:6f21bec0-521a-455c-b964-ef73ea0151cf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:36:54.598075 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5-shm.mount: Deactivated 
successfully. Jul 12 00:36:54.600064 kubelet[2103]: E0712 00:36:54.600016 2103 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:36:54.600167 kubelet[2103]: E0712 00:36:54.600096 2103 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-79v58" Jul 12 00:36:54.600167 kubelet[2103]: E0712 00:36:54.600116 2103 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-79v58" Jul 12 00:36:54.601548 kubelet[2103]: E0712 00:36:54.600163 2103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-79v58_calico-system(6f21bec0-521a-455c-b964-ef73ea0151cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-79v58_calico-system(6f21bec0-521a-455c-b964-ef73ea0151cf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-79v58" podUID="6f21bec0-521a-455c-b964-ef73ea0151cf" Jul 12 00:36:54.626146 env[1324]: time="2025-07-12T00:36:54.626094296Z" level=error msg="Failed to destroy network for sandbox \"37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:36:54.626660 env[1324]: time="2025-07-12T00:36:54.626624327Z" level=error msg="encountered an error cleaning up failed sandbox \"37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:36:54.626832 env[1324]: time="2025-07-12T00:36:54.626793270Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d9dcbc845-6nfvq,Uid:fecfff6f-79d3-4090-96c1-83913da0527a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:36:54.627206 kubelet[2103]: E0712 00:36:54.627147 2103 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jul 12 00:36:54.627306 kubelet[2103]: E0712 00:36:54.627209 2103 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d9dcbc845-6nfvq" Jul 12 00:36:54.627306 kubelet[2103]: E0712 00:36:54.627228 2103 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d9dcbc845-6nfvq" Jul 12 00:36:54.627306 kubelet[2103]: E0712 00:36:54.627269 2103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d9dcbc845-6nfvq_calico-apiserver(fecfff6f-79d3-4090-96c1-83913da0527a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d9dcbc845-6nfvq_calico-apiserver(fecfff6f-79d3-4090-96c1-83913da0527a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d9dcbc845-6nfvq" podUID="fecfff6f-79d3-4090-96c1-83913da0527a" Jul 12 00:36:54.689273 env[1324]: time="2025-07-12T00:36:54.689221119Z" level=error msg="Failed to destroy 
network for sandbox \"f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:36:54.689621 env[1324]: time="2025-07-12T00:36:54.689585848Z" level=error msg="encountered an error cleaning up failed sandbox \"f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:36:54.689665 env[1324]: time="2025-07-12T00:36:54.689635895Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dfvjl,Uid:3fa0a624-ecf9-48dd-83d5-27860d361813,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:36:54.691463 kubelet[2103]: E0712 00:36:54.691233 2103 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:36:54.691463 kubelet[2103]: E0712 00:36:54.691289 2103 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dfvjl" Jul 12 00:36:54.691463 kubelet[2103]: E0712 00:36:54.691311 2103 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dfvjl" Jul 12 00:36:54.693151 kubelet[2103]: E0712 00:36:54.691354 2103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dfvjl_kube-system(3fa0a624-ecf9-48dd-83d5-27860d361813)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dfvjl_kube-system(3fa0a624-ecf9-48dd-83d5-27860d361813)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dfvjl" podUID="3fa0a624-ecf9-48dd-83d5-27860d361813" Jul 12 00:36:54.709654 env[1324]: time="2025-07-12T00:36:54.709593050Z" level=error msg="Failed to destroy network for sandbox \"c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:36:54.709975 env[1324]: time="2025-07-12T00:36:54.709937337Z" level=error msg="encountered an error cleaning up failed sandbox 
\"c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:36:54.710036 env[1324]: time="2025-07-12T00:36:54.709990104Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nkzk8,Uid:3888f770-0a64-4382-86fc-ba4105786dc9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:36:54.710264 kubelet[2103]: E0712 00:36:54.710218 2103 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:36:54.710335 kubelet[2103]: E0712 00:36:54.710302 2103 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-nkzk8" Jul 12 00:36:54.710335 kubelet[2103]: E0712 00:36:54.710323 2103 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-nkzk8" Jul 12 00:36:54.710409 kubelet[2103]: E0712 00:36:54.710365 2103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-nkzk8_kube-system(3888f770-0a64-4382-86fc-ba4105786dc9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-nkzk8_kube-system(3888f770-0a64-4382-86fc-ba4105786dc9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-nkzk8" podUID="3888f770-0a64-4382-86fc-ba4105786dc9" Jul 12 00:36:54.710558 env[1324]: time="2025-07-12T00:36:54.710513134Z" level=error msg="Failed to destroy network for sandbox \"939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:36:54.712153 env[1324]: time="2025-07-12T00:36:54.711741538Z" level=error msg="encountered an error cleaning up failed sandbox \"939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:36:54.712153 env[1324]: time="2025-07-12T00:36:54.711795306Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d9765465-897pg,Uid:9c149c99-30db-4ff2-86be-58fb6e2d813a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:36:54.712288 kubelet[2103]: E0712 00:36:54.711949 2103 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:36:54.712288 kubelet[2103]: E0712 00:36:54.711992 2103 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d9765465-897pg"
Jul 12 00:36:54.712288 kubelet[2103]: E0712 00:36:54.712020 2103 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d9765465-897pg"
Jul 12 00:36:54.712412 kubelet[2103]: E0712 00:36:54.712051 2103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6d9765465-897pg_calico-system(9c149c99-30db-4ff2-86be-58fb6e2d813a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6d9765465-897pg_calico-system(9c149c99-30db-4ff2-86be-58fb6e2d813a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d9765465-897pg" podUID="9c149c99-30db-4ff2-86be-58fb6e2d813a"
Jul 12 00:36:54.725649 env[1324]: time="2025-07-12T00:36:54.725579554Z" level=error msg="Failed to destroy network for sandbox \"0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:36:54.726002 env[1324]: time="2025-07-12T00:36:54.725962245Z" level=error msg="encountered an error cleaning up failed sandbox \"0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:36:54.726047 env[1324]: time="2025-07-12T00:36:54.726012692Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f46b5b9d6-92dnp,Uid:8eb4ca6c-aaff-4fe8-9f0e-771c33b4200d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:36:54.726256 kubelet[2103]: E0712 00:36:54.726216 2103 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:36:54.726317 kubelet[2103]: E0712 00:36:54.726272 2103 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f46b5b9d6-92dnp"
Jul 12 00:36:54.726317 kubelet[2103]: E0712 00:36:54.726299 2103 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f46b5b9d6-92dnp"
Jul 12 00:36:54.726375 kubelet[2103]: E0712 00:36:54.726334 2103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f46b5b9d6-92dnp_calico-system(8eb4ca6c-aaff-4fe8-9f0e-771c33b4200d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f46b5b9d6-92dnp_calico-system(8eb4ca6c-aaff-4fe8-9f0e-771c33b4200d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f46b5b9d6-92dnp" podUID="8eb4ca6c-aaff-4fe8-9f0e-771c33b4200d"
Jul 12 00:36:54.731289 env[1324]: time="2025-07-12T00:36:54.731238352Z" level=error msg="Failed to destroy network for sandbox \"da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:36:54.731728 env[1324]: time="2025-07-12T00:36:54.731693533Z" level=error msg="encountered an error cleaning up failed sandbox \"da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:36:54.731852 env[1324]: time="2025-07-12T00:36:54.731825071Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d9dcbc845-hgq6q,Uid:faaca0cb-1548-4256-82f4-00433e531079,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:36:54.732140 kubelet[2103]: E0712 00:36:54.732090 2103 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:36:54.732215 kubelet[2103]: E0712 00:36:54.732146 2103 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d9dcbc845-hgq6q"
Jul 12 00:36:54.732215 kubelet[2103]: E0712 00:36:54.732164 2103 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d9dcbc845-hgq6q"
Jul 12 00:36:54.732266 kubelet[2103]: E0712 00:36:54.732200 2103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d9dcbc845-hgq6q_calico-apiserver(faaca0cb-1548-4256-82f4-00433e531079)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d9dcbc845-hgq6q_calico-apiserver(faaca0cb-1548-4256-82f4-00433e531079)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d9dcbc845-hgq6q" podUID="faaca0cb-1548-4256-82f4-00433e531079"
Jul 12 00:36:54.736834 env[1324]: time="2025-07-12T00:36:54.736788617Z" level=error msg="Failed to destroy network for sandbox \"6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:36:54.737289 env[1324]: time="2025-07-12T00:36:54.737257199Z" level=error msg="encountered an error cleaning up failed sandbox \"6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:36:54.737545 env[1324]: time="2025-07-12T00:36:54.737500752Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-6mdg7,Uid:118681c8-63c7-4aed-ae42-07c9da34ea65,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:36:54.737828 kubelet[2103]: E0712 00:36:54.737795 2103 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:36:54.737887 kubelet[2103]: E0712 00:36:54.737841 2103 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-6mdg7"
Jul 12 00:36:54.737887 kubelet[2103]: E0712 00:36:54.737856 2103 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-6mdg7"
Jul 12 00:36:54.737944 kubelet[2103]: E0712 00:36:54.737891 2103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-6mdg7_calico-system(118681c8-63c7-4aed-ae42-07c9da34ea65)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-6mdg7_calico-system(118681c8-63c7-4aed-ae42-07c9da34ea65)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-6mdg7" podUID="118681c8-63c7-4aed-ae42-07c9da34ea65"
Jul 12 00:36:55.495683 kubelet[2103]: I0712 00:36:55.495652 2103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7"
Jul 12 00:36:55.496610 env[1324]: time="2025-07-12T00:36:55.496570005Z" level=info msg="StopPodSandbox for \"f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7\""
Jul 12 00:36:55.498435 kubelet[2103]: I0712 00:36:55.498058 2103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13"
Jul 12 00:36:55.498995 env[1324]: time="2025-07-12T00:36:55.498598427Z" level=info msg="StopPodSandbox for \"da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13\""
Jul 12 00:36:55.500339 kubelet[2103]: I0712 00:36:55.499691 2103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273"
Jul 12 00:36:55.500440 env[1324]: time="2025-07-12T00:36:55.500419621Z" level=info msg="StopPodSandbox for \"939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273\""
Jul 12 00:36:55.501318 kubelet[2103]: I0712 00:36:55.501287 2103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61"
Jul 12 00:36:55.501796 env[1324]: time="2025-07-12T00:36:55.501760394Z" level=info msg="StopPodSandbox for \"37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61\""
Jul 12 00:36:55.502876 kubelet[2103]: I0712 00:36:55.502856 2103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc"
Jul 12 00:36:55.503337 env[1324]: time="2025-07-12T00:36:55.503310994Z" level=info msg="StopPodSandbox for \"0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc\""
Jul 12 00:36:55.505656 kubelet[2103]: I0712 00:36:55.505617 2103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5"
Jul 12 00:36:55.506174 env[1324]: time="2025-07-12T00:36:55.506037786Z" level=info msg="StopPodSandbox for \"7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5\""
Jul 12 00:36:55.508659 kubelet[2103]: I0712 00:36:55.508576 2103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587"
Jul 12 00:36:55.510531 env[1324]: time="2025-07-12T00:36:55.510443154Z" level=info msg="StopPodSandbox for \"c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587\""
Jul 12 00:36:55.512870 kubelet[2103]: I0712 00:36:55.512832 2103 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c"
Jul 12 00:36:55.515140 env[1324]: time="2025-07-12T00:36:55.514696902Z" level=info msg="StopPodSandbox for \"6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c\""
Jul 12 00:36:55.548732 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587-shm.mount: Deactivated successfully.
Jul 12 00:36:55.548906 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61-shm.mount: Deactivated successfully.
Jul 12 00:36:55.553443 env[1324]: time="2025-07-12T00:36:55.553375008Z" level=error msg="StopPodSandbox for \"da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13\" failed" error="failed to destroy network for sandbox \"da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:36:55.553747 kubelet[2103]: E0712 00:36:55.553652 2103 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13"
Jul 12 00:36:55.553747 kubelet[2103]: E0712 00:36:55.553714 2103 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13"}
Jul 12 00:36:55.553835 kubelet[2103]: E0712 00:36:55.553770 2103 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"faaca0cb-1548-4256-82f4-00433e531079\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jul 12 00:36:55.553835 kubelet[2103]: E0712 00:36:55.553791 2103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"faaca0cb-1548-4256-82f4-00433e531079\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d9dcbc845-hgq6q" podUID="faaca0cb-1548-4256-82f4-00433e531079"
Jul 12 00:36:55.561503 env[1324]: time="2025-07-12T00:36:55.561436408Z" level=error msg="StopPodSandbox for \"37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61\" failed" error="failed to destroy network for sandbox \"37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:36:55.561737 kubelet[2103]: E0712 00:36:55.561686 2103 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61"
Jul 12 00:36:55.561811 kubelet[2103]: E0712 00:36:55.561749 2103 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61"}
Jul 12 00:36:55.561811 kubelet[2103]: E0712 00:36:55.561788 2103 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fecfff6f-79d3-4090-96c1-83913da0527a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jul 12 00:36:55.561894 kubelet[2103]: E0712 00:36:55.561812 2103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fecfff6f-79d3-4090-96c1-83913da0527a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d9dcbc845-6nfvq" podUID="fecfff6f-79d3-4090-96c1-83913da0527a"
Jul 12 00:36:55.568303 env[1324]: time="2025-07-12T00:36:55.568230524Z" level=error msg="StopPodSandbox for \"7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5\" failed" error="failed to destroy network for sandbox \"7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:36:55.568536 env[1324]: time="2025-07-12T00:36:55.568475675Z" level=error msg="StopPodSandbox for \"0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc\" failed" error="failed to destroy network for sandbox \"0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:36:55.568797 kubelet[2103]: E0712 00:36:55.568581 2103 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5"
Jul 12 00:36:55.568797 kubelet[2103]: E0712 00:36:55.568655 2103 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5"}
Jul 12 00:36:55.568797 kubelet[2103]: E0712 00:36:55.568690 2103 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6f21bec0-521a-455c-b964-ef73ea0151cf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jul 12 00:36:55.568797 kubelet[2103]: E0712 00:36:55.568679 2103 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc"
Jul 12 00:36:55.568974 kubelet[2103]: E0712 00:36:55.568714 2103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6f21bec0-521a-455c-b964-ef73ea0151cf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-79v58" podUID="6f21bec0-521a-455c-b964-ef73ea0151cf"
Jul 12 00:36:55.568974 kubelet[2103]: E0712 00:36:55.568721 2103 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc"}
Jul 12 00:36:55.568974 kubelet[2103]: E0712 00:36:55.568749 2103 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8eb4ca6c-aaff-4fe8-9f0e-771c33b4200d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jul 12 00:36:55.568974 kubelet[2103]: E0712 00:36:55.568768 2103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8eb4ca6c-aaff-4fe8-9f0e-771c33b4200d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f46b5b9d6-92dnp" podUID="8eb4ca6c-aaff-4fe8-9f0e-771c33b4200d"
Jul 12 00:36:55.590061 env[1324]: time="2025-07-12T00:36:55.589985248Z" level=error msg="StopPodSandbox for \"939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273\" failed" error="failed to destroy network for sandbox \"939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:36:55.590415 kubelet[2103]: E0712 00:36:55.590261 2103 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273"
Jul 12 00:36:55.590415 kubelet[2103]: E0712 00:36:55.590314 2103 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273"}
Jul 12 00:36:55.590415 kubelet[2103]: E0712 00:36:55.590350 2103 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c149c99-30db-4ff2-86be-58fb6e2d813a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jul 12 00:36:55.590415 kubelet[2103]: E0712 00:36:55.590371 2103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c149c99-30db-4ff2-86be-58fb6e2d813a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d9765465-897pg" podUID="9c149c99-30db-4ff2-86be-58fb6e2d813a"
Jul 12 00:36:55.593307 env[1324]: time="2025-07-12T00:36:55.593247349Z" level=error msg="StopPodSandbox for \"f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7\" failed" error="failed to destroy network for sandbox \"f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:36:55.593491 kubelet[2103]: E0712 00:36:55.593456 2103 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7"
Jul 12 00:36:55.593547 kubelet[2103]: E0712 00:36:55.593498 2103 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7"}
Jul 12 00:36:55.593547 kubelet[2103]: E0712 00:36:55.593526 2103 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3fa0a624-ecf9-48dd-83d5-27860d361813\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jul 12 00:36:55.593649 kubelet[2103]: E0712 00:36:55.593550 2103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3fa0a624-ecf9-48dd-83d5-27860d361813\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dfvjl" podUID="3fa0a624-ecf9-48dd-83d5-27860d361813"
Jul 12 00:36:55.596427 env[1324]: time="2025-07-12T00:36:55.596355990Z" level=error msg="StopPodSandbox for \"6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c\" failed" error="failed to destroy network for sandbox \"6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:36:55.596526 env[1324]: time="2025-07-12T00:36:55.596361950Z" level=error msg="StopPodSandbox for \"c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587\" failed" error="failed to destroy network for sandbox \"c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 12 00:36:55.596754 kubelet[2103]: E0712 00:36:55.596668 2103 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587"
Jul 12 00:36:55.596754 kubelet[2103]: E0712 00:36:55.596683 2103 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c"
Jul 12 00:36:55.596754 kubelet[2103]: E0712 00:36:55.596703 2103 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587"}
Jul 12 00:36:55.596754 kubelet[2103]: E0712 00:36:55.596716 2103 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c"}
Jul 12 00:36:55.596754 kubelet[2103]: E0712 00:36:55.596744 2103 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"118681c8-63c7-4aed-ae42-07c9da34ea65\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jul 12 00:36:55.596942 kubelet[2103]: E0712 00:36:55.596805 2103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"118681c8-63c7-4aed-ae42-07c9da34ea65\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-6mdg7" podUID="118681c8-63c7-4aed-ae42-07c9da34ea65"
Jul 12 00:36:55.597068 kubelet[2103]: E0712 00:36:55.597007 2103 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3888f770-0a64-4382-86fc-ba4105786dc9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jul 12 00:36:55.597068 kubelet[2103]: E0712 00:36:55.597036 2103 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3888f770-0a64-4382-86fc-ba4105786dc9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-nkzk8" podUID="3888f770-0a64-4382-86fc-ba4105786dc9"
Jul 12 00:37:00.055401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2405317673.mount: Deactivated successfully.
Jul 12 00:37:00.261344 env[1324]: time="2025-07-12T00:37:00.261297834Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:37:00.266122 env[1324]: time="2025-07-12T00:37:00.266078509Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:37:00.267779 env[1324]: time="2025-07-12T00:37:00.267744368Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:37:00.269447 env[1324]: time="2025-07-12T00:37:00.269418068Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:37:00.269863 env[1324]: time="2025-07-12T00:37:00.269827432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\""
Jul 12 00:37:00.290240 env[1324]: time="2025-07-12T00:37:00.290196024Z" level=info msg="CreateContainer within sandbox \"9e695b5190d8a1747594a63426732c05c58a739dda77771b177d52ce5cde4f82\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jul 12 00:37:00.310619 env[1324]: time="2025-07-12T00:37:00.310502930Z" level=info msg="CreateContainer within sandbox \"9e695b5190d8a1747594a63426732c05c58a739dda77771b177d52ce5cde4f82\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7aeac180dbba380dea906f97d4e6e2cafed6f29bc993be835edc0f38c1a64778\""
Jul 12 00:37:00.312752 env[1324]: time="2025-07-12T00:37:00.311696698Z" level=info msg="StartContainer for
\"7aeac180dbba380dea906f97d4e6e2cafed6f29bc993be835edc0f38c1a64778\"" Jul 12 00:37:00.448394 env[1324]: time="2025-07-12T00:37:00.448281718Z" level=info msg="StartContainer for \"7aeac180dbba380dea906f97d4e6e2cafed6f29bc993be835edc0f38c1a64778\" returns successfully" Jul 12 00:37:00.543796 kubelet[2103]: I0712 00:37:00.543710 2103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kxbkw" podStartSLOduration=1.514122121 podStartE2EDuration="14.543693547s" podCreationTimestamp="2025-07-12 00:36:46 +0000 UTC" firstStartedPulling="2025-07-12 00:36:47.241457296 +0000 UTC m=+21.916664194" lastFinishedPulling="2025-07-12 00:37:00.271028762 +0000 UTC m=+34.946235620" observedRunningTime="2025-07-12 00:37:00.542928424 +0000 UTC m=+35.218135322" watchObservedRunningTime="2025-07-12 00:37:00.543693547 +0000 UTC m=+35.218900445" Jul 12 00:37:00.692775 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 12 00:37:00.692921 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 12 00:37:00.833159 env[1324]: time="2025-07-12T00:37:00.833104014Z" level=info msg="StopPodSandbox for \"939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273\"" Jul 12 00:37:01.152076 env[1324]: 2025-07-12 00:37:00.992 [INFO][3390] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" Jul 12 00:37:01.152076 env[1324]: 2025-07-12 00:37:00.993 [INFO][3390] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" iface="eth0" netns="/var/run/netns/cni-028cfbce-1d92-9919-38fa-cbdcdd177fb5" Jul 12 00:37:01.152076 env[1324]: 2025-07-12 00:37:00.994 [INFO][3390] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" iface="eth0" netns="/var/run/netns/cni-028cfbce-1d92-9919-38fa-cbdcdd177fb5" Jul 12 00:37:01.152076 env[1324]: 2025-07-12 00:37:00.994 [INFO][3390] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" iface="eth0" netns="/var/run/netns/cni-028cfbce-1d92-9919-38fa-cbdcdd177fb5" Jul 12 00:37:01.152076 env[1324]: 2025-07-12 00:37:00.995 [INFO][3390] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" Jul 12 00:37:01.152076 env[1324]: 2025-07-12 00:37:00.995 [INFO][3390] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" Jul 12 00:37:01.152076 env[1324]: 2025-07-12 00:37:01.135 [INFO][3399] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" HandleID="k8s-pod-network.939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" Workload="localhost-k8s-whisker--6d9765465--897pg-eth0" Jul 12 00:37:01.152076 env[1324]: 2025-07-12 00:37:01.136 [INFO][3399] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:37:01.152076 env[1324]: 2025-07-12 00:37:01.136 [INFO][3399] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:37:01.152076 env[1324]: 2025-07-12 00:37:01.147 [WARNING][3399] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" HandleID="k8s-pod-network.939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" Workload="localhost-k8s-whisker--6d9765465--897pg-eth0" Jul 12 00:37:01.152076 env[1324]: 2025-07-12 00:37:01.147 [INFO][3399] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" HandleID="k8s-pod-network.939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" Workload="localhost-k8s-whisker--6d9765465--897pg-eth0" Jul 12 00:37:01.152076 env[1324]: 2025-07-12 00:37:01.148 [INFO][3399] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:37:01.152076 env[1324]: 2025-07-12 00:37:01.150 [INFO][3390] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" Jul 12 00:37:01.154519 systemd[1]: run-netns-cni\x2d028cfbce\x2d1d92\x2d9919\x2d38fa\x2dcbdcdd177fb5.mount: Deactivated successfully. 
Jul 12 00:37:01.155314 env[1324]: time="2025-07-12T00:37:01.154504706Z" level=info msg="TearDown network for sandbox \"939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273\" successfully" Jul 12 00:37:01.155463 env[1324]: time="2025-07-12T00:37:01.155439283Z" level=info msg="StopPodSandbox for \"939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273\" returns successfully" Jul 12 00:37:01.246576 kubelet[2103]: I0712 00:37:01.246527 2103 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9c149c99-30db-4ff2-86be-58fb6e2d813a-whisker-backend-key-pair\") pod \"9c149c99-30db-4ff2-86be-58fb6e2d813a\" (UID: \"9c149c99-30db-4ff2-86be-58fb6e2d813a\") " Jul 12 00:37:01.246576 kubelet[2103]: I0712 00:37:01.246580 2103 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c149c99-30db-4ff2-86be-58fb6e2d813a-whisker-ca-bundle\") pod \"9c149c99-30db-4ff2-86be-58fb6e2d813a\" (UID: \"9c149c99-30db-4ff2-86be-58fb6e2d813a\") " Jul 12 00:37:01.246792 kubelet[2103]: I0712 00:37:01.246615 2103 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lln2p\" (UniqueName: \"kubernetes.io/projected/9c149c99-30db-4ff2-86be-58fb6e2d813a-kube-api-access-lln2p\") pod \"9c149c99-30db-4ff2-86be-58fb6e2d813a\" (UID: \"9c149c99-30db-4ff2-86be-58fb6e2d813a\") " Jul 12 00:37:01.249592 kubelet[2103]: I0712 00:37:01.249495 2103 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c149c99-30db-4ff2-86be-58fb6e2d813a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9c149c99-30db-4ff2-86be-58fb6e2d813a" (UID: "9c149c99-30db-4ff2-86be-58fb6e2d813a"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 12 00:37:01.251371 kubelet[2103]: I0712 00:37:01.251256 2103 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c149c99-30db-4ff2-86be-58fb6e2d813a-kube-api-access-lln2p" (OuterVolumeSpecName: "kube-api-access-lln2p") pod "9c149c99-30db-4ff2-86be-58fb6e2d813a" (UID: "9c149c99-30db-4ff2-86be-58fb6e2d813a"). InnerVolumeSpecName "kube-api-access-lln2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 12 00:37:01.252692 kubelet[2103]: I0712 00:37:01.252652 2103 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c149c99-30db-4ff2-86be-58fb6e2d813a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9c149c99-30db-4ff2-86be-58fb6e2d813a" (UID: "9c149c99-30db-4ff2-86be-58fb6e2d813a"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 12 00:37:01.253123 systemd[1]: var-lib-kubelet-pods-9c149c99\x2d30db\x2d4ff2\x2d86be\x2d58fb6e2d813a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 12 00:37:01.253327 systemd[1]: var-lib-kubelet-pods-9c149c99\x2d30db\x2d4ff2\x2d86be\x2d58fb6e2d813a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlln2p.mount: Deactivated successfully. 
Jul 12 00:37:01.347840 kubelet[2103]: I0712 00:37:01.347803 2103 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lln2p\" (UniqueName: \"kubernetes.io/projected/9c149c99-30db-4ff2-86be-58fb6e2d813a-kube-api-access-lln2p\") on node \"localhost\" DevicePath \"\"" Jul 12 00:37:01.348028 kubelet[2103]: I0712 00:37:01.348014 2103 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9c149c99-30db-4ff2-86be-58fb6e2d813a-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 12 00:37:01.348119 kubelet[2103]: I0712 00:37:01.348107 2103 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c149c99-30db-4ff2-86be-58fb6e2d813a-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 12 00:37:01.527567 kubelet[2103]: I0712 00:37:01.527397 2103 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:37:01.650085 kubelet[2103]: I0712 00:37:01.650025 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j82b\" (UniqueName: \"kubernetes.io/projected/cd53a686-a0c4-45aa-82d4-9c8a045b00c5-kube-api-access-5j82b\") pod \"whisker-54dcc86c67-wr4lz\" (UID: \"cd53a686-a0c4-45aa-82d4-9c8a045b00c5\") " pod="calico-system/whisker-54dcc86c67-wr4lz" Jul 12 00:37:01.650085 kubelet[2103]: I0712 00:37:01.650076 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd53a686-a0c4-45aa-82d4-9c8a045b00c5-whisker-ca-bundle\") pod \"whisker-54dcc86c67-wr4lz\" (UID: \"cd53a686-a0c4-45aa-82d4-9c8a045b00c5\") " pod="calico-system/whisker-54dcc86c67-wr4lz" Jul 12 00:37:01.650525 kubelet[2103]: I0712 00:37:01.650099 2103 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" 
(UniqueName: \"kubernetes.io/secret/cd53a686-a0c4-45aa-82d4-9c8a045b00c5-whisker-backend-key-pair\") pod \"whisker-54dcc86c67-wr4lz\" (UID: \"cd53a686-a0c4-45aa-82d4-9c8a045b00c5\") " pod="calico-system/whisker-54dcc86c67-wr4lz" Jul 12 00:37:01.879274 env[1324]: time="2025-07-12T00:37:01.879046711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54dcc86c67-wr4lz,Uid:cd53a686-a0c4-45aa-82d4-9c8a045b00c5,Namespace:calico-system,Attempt:0,}" Jul 12 00:37:02.024558 systemd-networkd[1098]: cali783a088ab54: Link UP Jul 12 00:37:02.026719 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 12 00:37:02.026863 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali783a088ab54: link becomes ready Jul 12 00:37:02.026915 systemd-networkd[1098]: cali783a088ab54: Gained carrier Jul 12 00:37:02.040469 env[1324]: 2025-07-12 00:37:01.912 [INFO][3422] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 12 00:37:02.040469 env[1324]: 2025-07-12 00:37:01.929 [INFO][3422] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--54dcc86c67--wr4lz-eth0 whisker-54dcc86c67- calico-system cd53a686-a0c4-45aa-82d4-9c8a045b00c5 882 0 2025-07-12 00:37:01 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:54dcc86c67 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-54dcc86c67-wr4lz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali783a088ab54 [] [] }} ContainerID="6a5da03f3b78cce9216c37a522c9cfcbe82ccb2c5e0a7c9d81af103aec349343" Namespace="calico-system" Pod="whisker-54dcc86c67-wr4lz" WorkloadEndpoint="localhost-k8s-whisker--54dcc86c67--wr4lz-" Jul 12 00:37:02.040469 env[1324]: 2025-07-12 00:37:01.929 [INFO][3422] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="6a5da03f3b78cce9216c37a522c9cfcbe82ccb2c5e0a7c9d81af103aec349343" Namespace="calico-system" Pod="whisker-54dcc86c67-wr4lz" WorkloadEndpoint="localhost-k8s-whisker--54dcc86c67--wr4lz-eth0" Jul 12 00:37:02.040469 env[1324]: 2025-07-12 00:37:01.959 [INFO][3437] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a5da03f3b78cce9216c37a522c9cfcbe82ccb2c5e0a7c9d81af103aec349343" HandleID="k8s-pod-network.6a5da03f3b78cce9216c37a522c9cfcbe82ccb2c5e0a7c9d81af103aec349343" Workload="localhost-k8s-whisker--54dcc86c67--wr4lz-eth0" Jul 12 00:37:02.040469 env[1324]: 2025-07-12 00:37:01.959 [INFO][3437] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6a5da03f3b78cce9216c37a522c9cfcbe82ccb2c5e0a7c9d81af103aec349343" HandleID="k8s-pod-network.6a5da03f3b78cce9216c37a522c9cfcbe82ccb2c5e0a7c9d81af103aec349343" Workload="localhost-k8s-whisker--54dcc86c67--wr4lz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c570), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-54dcc86c67-wr4lz", "timestamp":"2025-07-12 00:37:01.959342432 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:37:02.040469 env[1324]: 2025-07-12 00:37:01.963 [INFO][3437] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:37:02.040469 env[1324]: 2025-07-12 00:37:01.963 [INFO][3437] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:37:02.040469 env[1324]: 2025-07-12 00:37:01.963 [INFO][3437] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:37:02.040469 env[1324]: 2025-07-12 00:37:01.976 [INFO][3437] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6a5da03f3b78cce9216c37a522c9cfcbe82ccb2c5e0a7c9d81af103aec349343" host="localhost" Jul 12 00:37:02.040469 env[1324]: 2025-07-12 00:37:01.984 [INFO][3437] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:37:02.040469 env[1324]: 2025-07-12 00:37:01.991 [INFO][3437] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:37:02.040469 env[1324]: 2025-07-12 00:37:01.993 [INFO][3437] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:37:02.040469 env[1324]: 2025-07-12 00:37:01.995 [INFO][3437] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:37:02.040469 env[1324]: 2025-07-12 00:37:01.995 [INFO][3437] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6a5da03f3b78cce9216c37a522c9cfcbe82ccb2c5e0a7c9d81af103aec349343" host="localhost" Jul 12 00:37:02.040469 env[1324]: 2025-07-12 00:37:01.997 [INFO][3437] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6a5da03f3b78cce9216c37a522c9cfcbe82ccb2c5e0a7c9d81af103aec349343 Jul 12 00:37:02.040469 env[1324]: 2025-07-12 00:37:02.004 [INFO][3437] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6a5da03f3b78cce9216c37a522c9cfcbe82ccb2c5e0a7c9d81af103aec349343" host="localhost" Jul 12 00:37:02.040469 env[1324]: 2025-07-12 00:37:02.009 [INFO][3437] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.6a5da03f3b78cce9216c37a522c9cfcbe82ccb2c5e0a7c9d81af103aec349343" host="localhost" Jul 12 
00:37:02.040469 env[1324]: 2025-07-12 00:37:02.009 [INFO][3437] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.6a5da03f3b78cce9216c37a522c9cfcbe82ccb2c5e0a7c9d81af103aec349343" host="localhost" Jul 12 00:37:02.040469 env[1324]: 2025-07-12 00:37:02.009 [INFO][3437] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:37:02.040469 env[1324]: 2025-07-12 00:37:02.009 [INFO][3437] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="6a5da03f3b78cce9216c37a522c9cfcbe82ccb2c5e0a7c9d81af103aec349343" HandleID="k8s-pod-network.6a5da03f3b78cce9216c37a522c9cfcbe82ccb2c5e0a7c9d81af103aec349343" Workload="localhost-k8s-whisker--54dcc86c67--wr4lz-eth0" Jul 12 00:37:02.041036 env[1324]: 2025-07-12 00:37:02.011 [INFO][3422] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6a5da03f3b78cce9216c37a522c9cfcbe82ccb2c5e0a7c9d81af103aec349343" Namespace="calico-system" Pod="whisker-54dcc86c67-wr4lz" WorkloadEndpoint="localhost-k8s-whisker--54dcc86c67--wr4lz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--54dcc86c67--wr4lz-eth0", GenerateName:"whisker-54dcc86c67-", Namespace:"calico-system", SelfLink:"", UID:"cd53a686-a0c4-45aa-82d4-9c8a045b00c5", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 37, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54dcc86c67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-54dcc86c67-wr4lz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali783a088ab54", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:37:02.041036 env[1324]: 2025-07-12 00:37:02.011 [INFO][3422] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="6a5da03f3b78cce9216c37a522c9cfcbe82ccb2c5e0a7c9d81af103aec349343" Namespace="calico-system" Pod="whisker-54dcc86c67-wr4lz" WorkloadEndpoint="localhost-k8s-whisker--54dcc86c67--wr4lz-eth0" Jul 12 00:37:02.041036 env[1324]: 2025-07-12 00:37:02.011 [INFO][3422] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali783a088ab54 ContainerID="6a5da03f3b78cce9216c37a522c9cfcbe82ccb2c5e0a7c9d81af103aec349343" Namespace="calico-system" Pod="whisker-54dcc86c67-wr4lz" WorkloadEndpoint="localhost-k8s-whisker--54dcc86c67--wr4lz-eth0" Jul 12 00:37:02.041036 env[1324]: 2025-07-12 00:37:02.027 [INFO][3422] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a5da03f3b78cce9216c37a522c9cfcbe82ccb2c5e0a7c9d81af103aec349343" Namespace="calico-system" Pod="whisker-54dcc86c67-wr4lz" WorkloadEndpoint="localhost-k8s-whisker--54dcc86c67--wr4lz-eth0" Jul 12 00:37:02.041036 env[1324]: 2025-07-12 00:37:02.030 [INFO][3422] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6a5da03f3b78cce9216c37a522c9cfcbe82ccb2c5e0a7c9d81af103aec349343" Namespace="calico-system" Pod="whisker-54dcc86c67-wr4lz" WorkloadEndpoint="localhost-k8s-whisker--54dcc86c67--wr4lz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--54dcc86c67--wr4lz-eth0", GenerateName:"whisker-54dcc86c67-", Namespace:"calico-system", SelfLink:"", UID:"cd53a686-a0c4-45aa-82d4-9c8a045b00c5", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 37, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54dcc86c67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a5da03f3b78cce9216c37a522c9cfcbe82ccb2c5e0a7c9d81af103aec349343", Pod:"whisker-54dcc86c67-wr4lz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali783a088ab54", MAC:"aa:c3:32:e1:ce:56", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:37:02.041036 env[1324]: 2025-07-12 00:37:02.038 [INFO][3422] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6a5da03f3b78cce9216c37a522c9cfcbe82ccb2c5e0a7c9d81af103aec349343" Namespace="calico-system" Pod="whisker-54dcc86c67-wr4lz" WorkloadEndpoint="localhost-k8s-whisker--54dcc86c67--wr4lz-eth0" Jul 12 00:37:02.052064 env[1324]: time="2025-07-12T00:37:02.051979909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:37:02.052064 env[1324]: time="2025-07-12T00:37:02.052031634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:37:02.052064 env[1324]: time="2025-07-12T00:37:02.052041995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:37:02.052319 env[1324]: time="2025-07-12T00:37:02.052274459Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a5da03f3b78cce9216c37a522c9cfcbe82ccb2c5e0a7c9d81af103aec349343 pid=3461 runtime=io.containerd.runc.v2 Jul 12 00:37:02.084410 systemd-resolved[1238]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:37:02.124000 audit[3531]: AVC avc: denied { write } for pid=3531 comm="tee" name="fd" dev="proc" ino=19048 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 12 00:37:02.124000 audit[3531]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff107a7ed a2=241 a3=1b6 items=1 ppid=3499 pid=3531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:02.131030 kernel: audit: type=1400 audit(1752280622.124:285): avc: denied { write } for pid=3531 comm="tee" name="fd" dev="proc" ino=19048 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 12 00:37:02.131119 kernel: audit: type=1300 audit(1752280622.124:285): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff107a7ed a2=241 a3=1b6 items=1 ppid=3499 pid=3531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:02.124000 audit: CWD cwd="/etc/service/enabled/felix/log" Jul 12 00:37:02.132582 kernel: audit: type=1307 audit(1752280622.124:285): cwd="/etc/service/enabled/felix/log" Jul 12 00:37:02.124000 audit: PATH item=0 name="/dev/fd/63" inode=19892 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 12 00:37:02.135554 kernel: audit: type=1302 audit(1752280622.124:285): item=0 name="/dev/fd/63" inode=19892 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 12 00:37:02.124000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 12 00:37:02.147188 kernel: audit: type=1327 audit(1752280622.124:285): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 12 00:37:02.143000 audit[3542]: AVC avc: denied { write } for pid=3542 comm="tee" name="fd" dev="proc" ino=19915 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 12 00:37:02.149773 kernel: audit: type=1400 audit(1752280622.143:286): avc: denied { write } for pid=3542 comm="tee" name="fd" dev="proc" ino=19915 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 12 00:37:02.143000 audit[3542]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffcda7f7dd a2=241 a3=1b6 items=1 ppid=3502 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 
key=(null) Jul 12 00:37:02.150462 env[1324]: time="2025-07-12T00:37:02.150425197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54dcc86c67-wr4lz,Uid:cd53a686-a0c4-45aa-82d4-9c8a045b00c5,Namespace:calico-system,Attempt:0,} returns sandbox id \"6a5da03f3b78cce9216c37a522c9cfcbe82ccb2c5e0a7c9d81af103aec349343\"" Jul 12 00:37:02.153596 kernel: audit: type=1300 audit(1752280622.143:286): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffcda7f7dd a2=241 a3=1b6 items=1 ppid=3502 pid=3542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:02.143000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jul 12 00:37:02.154885 kernel: audit: type=1307 audit(1752280622.143:286): cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jul 12 00:37:02.143000 audit: PATH item=0 name="/dev/fd/63" inode=19908 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 12 00:37:02.157868 kernel: audit: type=1302 audit(1752280622.143:286): item=0 name="/dev/fd/63" inode=19908 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 12 00:37:02.157934 kernel: audit: type=1327 audit(1752280622.143:286): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 12 00:37:02.143000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 12 00:37:02.160622 env[1324]: time="2025-07-12T00:37:02.160582062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 12 00:37:02.161000 
audit[3546]: AVC avc: denied { write } for pid=3546 comm="tee" name="fd" dev="proc" ino=20491 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Jul 12 00:37:02.161000 audit[3546]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd2d8d7ee a2=241 a3=1b6 items=1 ppid=3495 pid=3546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:37:02.161000 audit: CWD cwd="/etc/service/enabled/bird/log"
Jul 12 00:37:02.161000 audit: PATH item=0 name="/dev/fd/63" inode=19909 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 12 00:37:02.161000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Jul 12 00:37:02.164000 audit[3580]: AVC avc: denied { write } for pid=3580 comm="tee" name="fd" dev="proc" ino=20495 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Jul 12 00:37:02.164000 audit[3580]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe1ad67ef a2=241 a3=1b6 items=1 ppid=3503 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:37:02.164000 audit: CWD cwd="/etc/service/enabled/cni/log"
Jul 12 00:37:02.164000 audit: PATH item=0 name="/dev/fd/63" inode=18131 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 12 00:37:02.164000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Jul 12 00:37:02.176000 audit[3568]: AVC avc: denied { write } for pid=3568 comm="tee" name="fd" dev="proc" ino=19928 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Jul 12 00:37:02.177000 audit[3581]: AVC avc: denied { write } for pid=3581 comm="tee" name="fd" dev="proc" ino=20500 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Jul 12 00:37:02.177000 audit[3581]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffffb8b97de a2=241 a3=1b6 items=1 ppid=3511 pid=3581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:37:02.177000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log"
Jul 12 00:37:02.177000 audit: PATH item=0 name="/dev/fd/63" inode=19066 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 12 00:37:02.177000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Jul 12 00:37:02.176000 audit[3568]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffdda437ed a2=241 a3=1b6 items=1 ppid=3497 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:37:02.176000 audit: CWD cwd="/etc/service/enabled/confd/log"
Jul 12 00:37:02.176000 audit: PATH item=0 name="/dev/fd/63" inode=18127 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 12 00:37:02.176000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Jul 12 00:37:02.185000 audit[3577]: AVC avc: denied { write } for pid=3577 comm="tee" name="fd" dev="proc" ino=19932 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Jul 12 00:37:02.185000 audit[3577]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffcf9347ed a2=241 a3=1b6 items=1 ppid=3506 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:37:02.185000 audit: CWD cwd="/etc/service/enabled/bird6/log"
Jul 12 00:37:02.185000 audit: PATH item=0 name="/dev/fd/63" inode=19925 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 12 00:37:02.185000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Jul 12 00:37:03.346545 systemd-networkd[1098]: cali783a088ab54: Gained IPv6LL
Jul 12 00:37:03.407162 kubelet[2103]: I0712 00:37:03.407122 2103 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c149c99-30db-4ff2-86be-58fb6e2d813a" path="/var/lib/kubelet/pods/9c149c99-30db-4ff2-86be-58fb6e2d813a/volumes"
Jul 12 00:37:03.491419 env[1324]: time="2025-07-12T00:37:03.491361702Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:37:03.492753 env[1324]: time="2025-07-12T00:37:03.492720675Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:37:03.494332 env[1324]: time="2025-07-12T00:37:03.494297389Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:37:03.495749 env[1324]: time="2025-07-12T00:37:03.495717528Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:37:03.496369 env[1324]: time="2025-07-12T00:37:03.496341669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\""
Jul 12 00:37:03.499309 env[1324]: time="2025-07-12T00:37:03.498599810Z" level=info msg="CreateContainer within sandbox \"6a5da03f3b78cce9216c37a522c9cfcbe82ccb2c5e0a7c9d81af103aec349343\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Jul 12 00:37:03.514185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount711227733.mount: Deactivated successfully.
Jul 12 00:37:03.521761 env[1324]: time="2025-07-12T00:37:03.521701509Z" level=info msg="CreateContainer within sandbox \"6a5da03f3b78cce9216c37a522c9cfcbe82ccb2c5e0a7c9d81af103aec349343\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"37a5125147dab2692d408baa55724f23ce5eafa4327b243d56d46890e6ad5180\""
Jul 12 00:37:03.522565 env[1324]: time="2025-07-12T00:37:03.522528870Z" level=info msg="StartContainer for \"37a5125147dab2692d408baa55724f23ce5eafa4327b243d56d46890e6ad5180\""
Jul 12 00:37:03.697046 env[1324]: time="2025-07-12T00:37:03.697002049Z" level=info msg="StartContainer for \"37a5125147dab2692d408baa55724f23ce5eafa4327b243d56d46890e6ad5180\" returns successfully"
Jul 12 00:37:03.699300 env[1324]: time="2025-07-12T00:37:03.699264711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\""
Jul 12 00:37:05.481303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2177675903.mount: Deactivated successfully.
Jul 12 00:37:05.580119 env[1324]: time="2025-07-12T00:37:05.579992657Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:37:05.589526 env[1324]: time="2025-07-12T00:37:05.589450489Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:37:05.602705 env[1324]: time="2025-07-12T00:37:05.602659586Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:37:05.616043 env[1324]: time="2025-07-12T00:37:05.615991256Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 12 00:37:05.616858 env[1324]: time="2025-07-12T00:37:05.616813571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\""
Jul 12 00:37:05.619103 env[1324]: time="2025-07-12T00:37:05.619046897Z" level=info msg="CreateContainer within sandbox \"6a5da03f3b78cce9216c37a522c9cfcbe82ccb2c5e0a7c9d81af103aec349343\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Jul 12 00:37:05.656894 env[1324]: time="2025-07-12T00:37:05.656844622Z" level=info msg="CreateContainer within sandbox \"6a5da03f3b78cce9216c37a522c9cfcbe82ccb2c5e0a7c9d81af103aec349343\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"234bdb29adde1f07f5f3d3fec943243447b3656c62a519533b709b4ca98de068\""
Jul 12 00:37:05.657464 env[1324]: time="2025-07-12T00:37:05.657435117Z" level=info msg="StartContainer for \"234bdb29adde1f07f5f3d3fec943243447b3656c62a519533b709b4ca98de068\""
Jul 12 00:37:05.750908 env[1324]: time="2025-07-12T00:37:05.750793324Z" level=info msg="StartContainer for \"234bdb29adde1f07f5f3d3fec943243447b3656c62a519533b709b4ca98de068\" returns successfully"
Jul 12 00:37:06.128253 systemd[1]: Started sshd@7-10.0.0.111:22-10.0.0.1:39268.service.
Jul 12 00:37:06.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.111:22-10.0.0.1:39268 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:37:06.182000 audit[3748]: USER_ACCT pid=3748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 12 00:37:06.183901 sshd[3748]: Accepted publickey for core from 10.0.0.1 port 39268 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o
Jul 12 00:37:06.183000 audit[3748]: CRED_ACQ pid=3748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 12 00:37:06.183000 audit[3748]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc859cdc0 a2=3 a3=1 items=0 ppid=1 pid=3748 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:37:06.183000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 12 00:37:06.185211 sshd[3748]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:37:06.189372 systemd-logind[1309]: New session 8 of user core.
Jul 12 00:37:06.190351 systemd[1]: Started session-8.scope.
Jul 12 00:37:06.193000 audit[3748]: USER_START pid=3748 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 12 00:37:06.194000 audit[3752]: CRED_ACQ pid=3752 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 12 00:37:06.370285 sshd[3748]: pam_unix(sshd:session): session closed for user core
Jul 12 00:37:06.369000 audit[3748]: USER_END pid=3748 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 12 00:37:06.370000 audit[3748]: CRED_DISP pid=3748 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 12 00:37:06.374183 systemd[1]: sshd@7-10.0.0.111:22-10.0.0.1:39268.service: Deactivated successfully.
Jul 12 00:37:06.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.111:22-10.0.0.1:39268 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:37:06.375432 systemd-logind[1309]: Session 8 logged out. Waiting for processes to exit.
Jul 12 00:37:06.375481 systemd[1]: session-8.scope: Deactivated successfully.
Jul 12 00:37:06.376339 systemd-logind[1309]: Removed session 8.
Jul 12 00:37:06.553251 kubelet[2103]: I0712 00:37:06.551938 2103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-54dcc86c67-wr4lz" podStartSLOduration=2.094155821 podStartE2EDuration="5.551914436s" podCreationTimestamp="2025-07-12 00:37:01 +0000 UTC" firstStartedPulling="2025-07-12 00:37:02.160050368 +0000 UTC m=+36.835257266" lastFinishedPulling="2025-07-12 00:37:05.617808983 +0000 UTC m=+40.293015881" observedRunningTime="2025-07-12 00:37:06.551636571 +0000 UTC m=+41.226843469" watchObservedRunningTime="2025-07-12 00:37:06.551914436 +0000 UTC m=+41.227121334"
Jul 12 00:37:06.568000 audit[3771]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=3771 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 12 00:37:06.568000 audit[3771]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe700b630 a2=0 a3=1 items=0 ppid=2213 pid=3771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:37:06.568000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 12 00:37:06.580000 audit[3771]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_register_chain pid=3771 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 12 00:37:06.580000 audit[3771]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffe700b630 a2=0 a3=1 items=0 ppid=2213 pid=3771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:37:06.580000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 12 00:37:07.222547 kubelet[2103]: I0712 00:37:07.222497 2103 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 12 00:37:07.222890 kubelet[2103]: E0712 00:37:07.222867 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:37:07.254574 kernel: kauditd_printk_skb: 42 callbacks suppressed
Jul 12 00:37:07.254683 kernel: audit: type=1325 audit(1752280627.247:303): table=filter:101 family=2 entries=19 op=nft_register_rule pid=3789 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 12 00:37:07.254708 kernel: audit: type=1300 audit(1752280627.247:303): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffd2deeef0 a2=0 a3=1 items=0 ppid=2213 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:37:07.247000 audit[3789]: NETFILTER_CFG table=filter:101 family=2 entries=19 op=nft_register_rule pid=3789 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 12 00:37:07.247000 audit[3789]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffd2deeef0 a2=0 a3=1 items=0 ppid=2213 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:37:07.247000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 12 00:37:07.258082 kernel: audit: type=1327 audit(1752280627.247:303): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 12 00:37:07.257000 audit[3789]: NETFILTER_CFG table=nat:102 family=2 entries=21 op=nft_register_chain pid=3789 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 12 00:37:07.257000 audit[3789]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7044 a0=3 a1=ffffd2deeef0 a2=0 a3=1 items=0 ppid=2213 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:37:07.264671 kernel: audit: type=1325 audit(1752280627.257:304): table=nat:102 family=2 entries=21 op=nft_register_chain pid=3789 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 12 00:37:07.264720 kernel: audit: type=1300 audit(1752280627.257:304): arch=c00000b7 syscall=211 success=yes exit=7044 a0=3 a1=ffffd2deeef0 a2=0 a3=1 items=0 ppid=2213 pid=3789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:37:07.257000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 12 00:37:07.266519 kernel: audit: type=1327 audit(1752280627.257:304): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 12 00:37:07.405699 env[1324]: time="2025-07-12T00:37:07.405649752Z" level=info msg="StopPodSandbox for \"da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13\""
Jul 12 00:37:07.406044 env[1324]: time="2025-07-12T00:37:07.405710117Z" level=info msg="StopPodSandbox for \"c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587\""
Jul 12 00:37:07.406044 env[1324]: time="2025-07-12T00:37:07.405657993Z" level=info msg="StopPodSandbox for \"0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc\""
Jul 12 00:37:07.533505 env[1324]: 2025-07-12 00:37:07.470 [INFO][3827] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc"
Jul 12 00:37:07.533505 env[1324]: 2025-07-12 00:37:07.471 [INFO][3827] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc" iface="eth0" netns="/var/run/netns/cni-a7934f34-f7b3-0ba2-77ef-6cd74e05d604"
Jul 12 00:37:07.533505 env[1324]: 2025-07-12 00:37:07.471 [INFO][3827] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc" iface="eth0" netns="/var/run/netns/cni-a7934f34-f7b3-0ba2-77ef-6cd74e05d604"
Jul 12 00:37:07.533505 env[1324]: 2025-07-12 00:37:07.471 [INFO][3827] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc" iface="eth0" netns="/var/run/netns/cni-a7934f34-f7b3-0ba2-77ef-6cd74e05d604"
Jul 12 00:37:07.533505 env[1324]: 2025-07-12 00:37:07.471 [INFO][3827] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc"
Jul 12 00:37:07.533505 env[1324]: 2025-07-12 00:37:07.471 [INFO][3827] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc"
Jul 12 00:37:07.533505 env[1324]: 2025-07-12 00:37:07.511 [INFO][3856] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc" HandleID="k8s-pod-network.0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc" Workload="localhost-k8s-calico--kube--controllers--7f46b5b9d6--92dnp-eth0"
Jul 12 00:37:07.533505 env[1324]: 2025-07-12 00:37:07.511 [INFO][3856] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 12 00:37:07.533505 env[1324]: 2025-07-12 00:37:07.511 [INFO][3856] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 12 00:37:07.533505 env[1324]: 2025-07-12 00:37:07.524 [WARNING][3856] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc" HandleID="k8s-pod-network.0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc" Workload="localhost-k8s-calico--kube--controllers--7f46b5b9d6--92dnp-eth0"
Jul 12 00:37:07.533505 env[1324]: 2025-07-12 00:37:07.524 [INFO][3856] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc" HandleID="k8s-pod-network.0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc" Workload="localhost-k8s-calico--kube--controllers--7f46b5b9d6--92dnp-eth0"
Jul 12 00:37:07.533505 env[1324]: 2025-07-12 00:37:07.526 [INFO][3856] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 12 00:37:07.533505 env[1324]: 2025-07-12 00:37:07.528 [INFO][3827] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc"
Jul 12 00:37:07.533505 env[1324]: time="2025-07-12T00:37:07.532166036Z" level=info msg="TearDown network for sandbox \"0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc\" successfully"
Jul 12 00:37:07.533505 env[1324]: time="2025-07-12T00:37:07.532204479Z" level=info msg="StopPodSandbox for \"0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc\" returns successfully"
Jul 12 00:37:07.534410 systemd[1]: run-netns-cni\x2da7934f34\x2df7b3\x2d0ba2\x2d77ef\x2d6cd74e05d604.mount: Deactivated successfully.
Jul 12 00:37:07.534965 env[1324]: time="2025-07-12T00:37:07.534899395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f46b5b9d6-92dnp,Uid:8eb4ca6c-aaff-4fe8-9f0e-771c33b4200d,Namespace:calico-system,Attempt:1,}"
Jul 12 00:37:07.536000 audit[3888]: AVC avc: denied { bpf } for pid=3888 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.536000 audit[3888]: AVC avc: denied { bpf } for pid=3888 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.543537 kernel: audit: type=1400 audit(1752280627.536:305): avc: denied { bpf } for pid=3888 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.543623 kernel: audit: type=1400 audit(1752280627.536:305): avc: denied { bpf } for pid=3888 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.543642 kernel: audit: type=1400 audit(1752280627.536:305): avc: denied { perfmon } for pid=3888 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.536000 audit[3888]: AVC avc: denied { perfmon } for pid=3888 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.543991 kubelet[2103]: E0712 00:37:07.543950 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:37:07.536000 audit[3888]: AVC avc: denied { perfmon } for pid=3888 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.549224 kernel: audit: type=1400 audit(1752280627.536:305): avc: denied { perfmon } for pid=3888 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.536000 audit[3888]: AVC avc: denied { perfmon } for pid=3888 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.536000 audit[3888]: AVC avc: denied { perfmon } for pid=3888 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.536000 audit[3888]: AVC avc: denied { perfmon } for pid=3888 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.536000 audit[3888]: AVC avc: denied { bpf } for pid=3888 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.536000 audit[3888]: AVC avc: denied { bpf } for pid=3888 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.536000 audit: BPF prog-id=10 op=LOAD
Jul 12 00:37:07.536000 audit[3888]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc75adf88 a2=98 a3=ffffc75adf78 items=0 ppid=3790 pid=3888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:37:07.536000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030
Jul 12 00:37:07.536000 audit: BPF prog-id=10 op=UNLOAD
Jul 12 00:37:07.536000 audit[3888]: AVC avc: denied { bpf } for pid=3888 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.536000 audit[3888]: AVC avc: denied { bpf } for pid=3888 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.536000 audit[3888]: AVC avc: denied { perfmon } for pid=3888 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.536000 audit[3888]: AVC avc: denied { perfmon } for pid=3888 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.536000 audit[3888]: AVC avc: denied { perfmon } for pid=3888 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.536000 audit[3888]: AVC avc: denied { perfmon } for pid=3888 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.536000 audit[3888]: AVC avc: denied { perfmon } for pid=3888 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.536000 audit[3888]: AVC avc: denied { bpf } for pid=3888 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.536000 audit[3888]: AVC avc: denied { bpf } for pid=3888 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.536000 audit: BPF prog-id=11 op=LOAD
Jul 12 00:37:07.536000 audit[3888]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc75ade38 a2=74 a3=95 items=0 ppid=3790 pid=3888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:37:07.536000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030
Jul 12 00:37:07.537000 audit: BPF prog-id=11 op=UNLOAD
Jul 12 00:37:07.537000 audit[3888]: AVC avc: denied { bpf } for pid=3888 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.537000 audit[3888]: AVC avc: denied { bpf } for pid=3888 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.537000 audit[3888]: AVC avc: denied { perfmon } for pid=3888 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.537000 audit[3888]: AVC avc: denied { perfmon } for pid=3888 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.537000 audit[3888]: AVC avc: denied { perfmon } for pid=3888 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.537000 audit[3888]: AVC avc: denied { perfmon } for pid=3888 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.537000 audit[3888]: AVC avc: denied { perfmon } for pid=3888 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.537000 audit[3888]: AVC avc: denied { bpf } for pid=3888 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.537000 audit[3888]: AVC avc: denied { bpf } for pid=3888 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.537000 audit: BPF prog-id=12 op=LOAD
Jul 12 00:37:07.537000 audit[3888]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc75ade68 a2=40 a3=ffffc75ade98 items=0 ppid=3790 pid=3888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:37:07.537000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030
Jul 12 00:37:07.537000 audit: BPF prog-id=12 op=UNLOAD
Jul 12 00:37:07.537000 audit[3888]: AVC avc: denied { perfmon } for pid=3888 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.537000 audit[3888]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=0 a1=ffffc75adf80 a2=50 a3=0 items=0 ppid=3790 pid=3888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:37:07.537000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030
Jul 12 00:37:07.544000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.544000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.544000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.544000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.544000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.544000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.544000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.544000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.544000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.544000 audit: BPF prog-id=13 op=LOAD
Jul 12 00:37:07.544000 audit[3890]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffecf9b168 a2=98 a3=ffffecf9b158 items=0 ppid=3790 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:37:07.544000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Jul 12 00:37:07.545000 audit: BPF prog-id=13 op=UNLOAD
Jul 12 00:37:07.545000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.545000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.545000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.545000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.545000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.545000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.545000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.545000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.545000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.545000 audit[3890]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffecf9adf8 a2=74 a3=95 items=0 ppid=3790 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:37:07.545000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Jul 12 00:37:07.550000 audit: BPF prog-id=14 op=UNLOAD
Jul 12 00:37:07.550000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.550000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.550000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.550000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.550000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.550000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.550000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.550000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.550000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Jul 12 00:37:07.550000 audit: BPF prog-id=15 op=LOAD
Jul 12 00:37:07.550000 audit[3890]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffecf9ae58 a2=94 a3=2 items=0 ppid=3790 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:37:07.550000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Jul 12 00:37:07.551000 audit: BPF prog-id=15 op=UNLOAD
Jul 12 00:37:07.564497 env[1324]:
2025-07-12 00:37:07.497 [INFO][3826] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" Jul 12 00:37:07.564497 env[1324]: 2025-07-12 00:37:07.497 [INFO][3826] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" iface="eth0" netns="/var/run/netns/cni-05c3b8e0-569e-360d-9d55-6dd4ca8f5cde" Jul 12 00:37:07.564497 env[1324]: 2025-07-12 00:37:07.497 [INFO][3826] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" iface="eth0" netns="/var/run/netns/cni-05c3b8e0-569e-360d-9d55-6dd4ca8f5cde" Jul 12 00:37:07.564497 env[1324]: 2025-07-12 00:37:07.499 [INFO][3826] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" iface="eth0" netns="/var/run/netns/cni-05c3b8e0-569e-360d-9d55-6dd4ca8f5cde" Jul 12 00:37:07.564497 env[1324]: 2025-07-12 00:37:07.499 [INFO][3826] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" Jul 12 00:37:07.564497 env[1324]: 2025-07-12 00:37:07.499 [INFO][3826] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" Jul 12 00:37:07.564497 env[1324]: 2025-07-12 00:37:07.533 [INFO][3873] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" HandleID="k8s-pod-network.c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" Workload="localhost-k8s-coredns--7c65d6cfc9--nkzk8-eth0" Jul 12 00:37:07.564497 env[1324]: 2025-07-12 00:37:07.533 [INFO][3873] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 12 00:37:07.564497 env[1324]: 2025-07-12 00:37:07.533 [INFO][3873] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:37:07.564497 env[1324]: 2025-07-12 00:37:07.551 [WARNING][3873] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" HandleID="k8s-pod-network.c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" Workload="localhost-k8s-coredns--7c65d6cfc9--nkzk8-eth0" Jul 12 00:37:07.564497 env[1324]: 2025-07-12 00:37:07.552 [INFO][3873] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" HandleID="k8s-pod-network.c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" Workload="localhost-k8s-coredns--7c65d6cfc9--nkzk8-eth0" Jul 12 00:37:07.564497 env[1324]: 2025-07-12 00:37:07.554 [INFO][3873] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:37:07.564497 env[1324]: 2025-07-12 00:37:07.562 [INFO][3826] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" Jul 12 00:37:07.565119 env[1324]: time="2025-07-12T00:37:07.565084070Z" level=info msg="TearDown network for sandbox \"c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587\" successfully" Jul 12 00:37:07.565194 env[1324]: time="2025-07-12T00:37:07.565177758Z" level=info msg="StopPodSandbox for \"c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587\" returns successfully" Jul 12 00:37:07.568505 systemd[1]: run-netns-cni\x2d05c3b8e0\x2d569e\x2d360d\x2d9d55\x2d6dd4ca8f5cde.mount: Deactivated successfully. 
Jul 12 00:37:07.577925 kubelet[2103]: E0712 00:37:07.577889 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:37:07.578831 env[1324]: time="2025-07-12T00:37:07.578782305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nkzk8,Uid:3888f770-0a64-4382-86fc-ba4105786dc9,Namespace:kube-system,Attempt:1,}" Jul 12 00:37:07.604206 env[1324]: 2025-07-12 00:37:07.493 [INFO][3842] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" Jul 12 00:37:07.604206 env[1324]: 2025-07-12 00:37:07.493 [INFO][3842] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" iface="eth0" netns="/var/run/netns/cni-9860a2b6-ddfe-e644-a928-51a430134d5b" Jul 12 00:37:07.604206 env[1324]: 2025-07-12 00:37:07.493 [INFO][3842] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" iface="eth0" netns="/var/run/netns/cni-9860a2b6-ddfe-e644-a928-51a430134d5b" Jul 12 00:37:07.604206 env[1324]: 2025-07-12 00:37:07.494 [INFO][3842] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" iface="eth0" netns="/var/run/netns/cni-9860a2b6-ddfe-e644-a928-51a430134d5b" Jul 12 00:37:07.604206 env[1324]: 2025-07-12 00:37:07.494 [INFO][3842] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" Jul 12 00:37:07.604206 env[1324]: 2025-07-12 00:37:07.494 [INFO][3842] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" Jul 12 00:37:07.604206 env[1324]: 2025-07-12 00:37:07.540 [INFO][3865] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" HandleID="k8s-pod-network.da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" Workload="localhost-k8s-calico--apiserver--7d9dcbc845--hgq6q-eth0" Jul 12 00:37:07.604206 env[1324]: 2025-07-12 00:37:07.540 [INFO][3865] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:37:07.604206 env[1324]: 2025-07-12 00:37:07.554 [INFO][3865] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:37:07.604206 env[1324]: 2025-07-12 00:37:07.587 [WARNING][3865] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" HandleID="k8s-pod-network.da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" Workload="localhost-k8s-calico--apiserver--7d9dcbc845--hgq6q-eth0" Jul 12 00:37:07.604206 env[1324]: 2025-07-12 00:37:07.587 [INFO][3865] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" HandleID="k8s-pod-network.da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" Workload="localhost-k8s-calico--apiserver--7d9dcbc845--hgq6q-eth0" Jul 12 00:37:07.604206 env[1324]: 2025-07-12 00:37:07.590 [INFO][3865] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:37:07.604206 env[1324]: 2025-07-12 00:37:07.592 [INFO][3842] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" Jul 12 00:37:07.604673 env[1324]: time="2025-07-12T00:37:07.604363058Z" level=info msg="TearDown network for sandbox \"da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13\" successfully" Jul 12 00:37:07.604673 env[1324]: time="2025-07-12T00:37:07.604404302Z" level=info msg="StopPodSandbox for \"da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13\" returns successfully" Jul 12 00:37:07.605048 env[1324]: time="2025-07-12T00:37:07.605018036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d9dcbc845-hgq6q,Uid:faaca0cb-1548-4256-82f4-00433e531079,Namespace:calico-apiserver,Attempt:1,}" Jul 12 00:37:07.644663 kubelet[2103]: I0712 00:37:07.644627 2103 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:37:07.671000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.671000 audit[3890]: AVC avc: denied { bpf } 
for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.671000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.671000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.671000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.671000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.671000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.671000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.671000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.671000 audit: BPF prog-id=16 op=LOAD Jul 12 00:37:07.671000 audit[3890]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffecf9ae18 a2=40 a3=ffffecf9ae48 items=0 ppid=3790 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.671000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:37:07.671000 audit: BPF prog-id=16 op=UNLOAD Jul 12 00:37:07.671000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.671000 audit[3890]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffecf9af30 a2=50 a3=0 items=0 ppid=3790 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.671000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:37:07.692000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.692000 audit[3890]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffecf9ae88 a2=28 a3=ffffecf9afb8 items=0 ppid=3790 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:37:07.692000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.692000 audit[3890]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffecf9aeb8 a2=28 a3=ffffecf9afe8 items=0 ppid=3790 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:37:07.692000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.692000 audit[3890]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffecf9ad68 a2=28 a3=ffffecf9ae98 items=0 ppid=3790 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:37:07.692000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.692000 audit[3890]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffecf9aed8 a2=28 a3=ffffecf9b008 items=0 ppid=3790 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:37:07.692000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.692000 audit[3890]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffecf9aeb8 a2=28 a3=ffffecf9afe8 items=0 ppid=3790 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:37:07.692000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.692000 audit[3890]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffecf9aea8 a2=28 a3=ffffecf9afd8 items=0 ppid=3790 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:37:07.692000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.692000 audit[3890]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffecf9aed8 a2=28 a3=ffffecf9b008 items=0 ppid=3790 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:37:07.692000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.692000 audit[3890]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffecf9aeb8 a2=28 a3=ffffecf9afe8 items=0 ppid=3790 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jul 12 00:37:07.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:37:07.692000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.692000 audit[3890]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffecf9aed8 a2=28 a3=ffffecf9b008 items=0 ppid=3790 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:37:07.692000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.692000 audit[3890]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffecf9aea8 a2=28 a3=ffffecf9afd8 items=0 ppid=3790 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:37:07.692000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.692000 audit[3890]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffecf9af28 a2=28 a3=ffffecf9b068 items=0 ppid=3790 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.692000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:37:07.692000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.692000 audit[3890]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffecf9ac60 a2=50 a3=0 items=0 ppid=3790 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:37:07.692000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.692000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.692000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.692000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.692000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.692000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 
00:37:07.692000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.692000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.692000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.692000 audit: BPF prog-id=17 op=LOAD Jul 12 00:37:07.692000 audit[3890]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffecf9ac68 a2=94 a3=5 items=0 ppid=3790 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:37:07.692000 audit: BPF prog-id=17 op=UNLOAD Jul 12 00:37:07.692000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.692000 audit[3890]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffecf9ad70 a2=50 a3=0 items=0 ppid=3790 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:37:07.692000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Jul 12 00:37:07.692000 audit[3890]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffecf9aeb8 a2=4 a3=3 items=0 ppid=3790 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:37:07.692000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.692000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.692000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.692000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.692000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.692000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.692000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.692000 audit[3890]: AVC 
avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.692000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.692000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.692000 audit[3890]: AVC avc: denied { confidentiality } for pid=3890 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 12 00:37:07.692000 audit[3890]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffecf9ae98 a2=94 a3=6 items=0 ppid=3790 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:37:07.693000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.693000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.693000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.693000 audit[3890]: AVC avc: denied { bpf } for pid=3890 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.693000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.693000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.693000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.693000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.693000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.693000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.693000 audit[3890]: AVC avc: denied { confidentiality } for pid=3890 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 12 00:37:07.693000 audit[3890]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffecf9a668 a2=94 a3=83 items=0 ppid=3790 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.693000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:37:07.693000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.693000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.693000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.693000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.693000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.693000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.693000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.693000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.693000 audit[3890]: AVC avc: denied { perfmon } for pid=3890 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.693000 audit[3890]: AVC avc: denied { bpf } for pid=3890 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.693000 audit[3890]: AVC avc: denied { confidentiality } for pid=3890 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 12 00:37:07.693000 audit[3890]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffecf9a668 a2=94 a3=83 items=0 ppid=3790 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.693000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 12 00:37:07.715000 audit[3978]: AVC avc: denied { bpf } for pid=3978 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.715000 audit[3978]: AVC avc: denied { bpf } for pid=3978 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.715000 audit[3978]: AVC avc: denied { perfmon } for pid=3978 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.715000 audit[3978]: AVC avc: denied { perfmon } for pid=3978 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.715000 audit[3978]: AVC avc: denied { perfmon } for pid=3978 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.715000 audit[3978]: AVC avc: denied { perfmon } for pid=3978 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.715000 audit[3978]: AVC avc: denied { perfmon } for pid=3978 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.715000 audit[3978]: AVC avc: denied { bpf } for pid=3978 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.715000 audit[3978]: AVC avc: denied { bpf } for pid=3978 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.715000 audit: BPF prog-id=18 op=LOAD Jul 12 00:37:07.715000 audit[3978]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd6315ab8 a2=98 a3=ffffd6315aa8 items=0 ppid=3790 pid=3978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.715000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 12 00:37:07.715000 audit: BPF prog-id=18 op=UNLOAD Jul 12 00:37:07.715000 audit[3978]: AVC avc: denied { bpf } for pid=3978 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.715000 audit[3978]: AVC avc: denied { bpf } for 
pid=3978 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.715000 audit[3978]: AVC avc: denied { perfmon } for pid=3978 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.715000 audit[3978]: AVC avc: denied { perfmon } for pid=3978 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.715000 audit[3978]: AVC avc: denied { perfmon } for pid=3978 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.715000 audit[3978]: AVC avc: denied { perfmon } for pid=3978 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.715000 audit[3978]: AVC avc: denied { perfmon } for pid=3978 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.715000 audit[3978]: AVC avc: denied { bpf } for pid=3978 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.715000 audit[3978]: AVC avc: denied { bpf } for pid=3978 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.715000 audit: BPF prog-id=19 op=LOAD Jul 12 00:37:07.715000 audit[3978]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd6315968 a2=74 a3=95 items=0 ppid=3790 pid=3978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.715000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 12 00:37:07.715000 audit: BPF prog-id=19 op=UNLOAD Jul 12 00:37:07.715000 audit[3978]: AVC avc: denied { bpf } for pid=3978 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.715000 audit[3978]: AVC avc: denied { bpf } for pid=3978 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.715000 audit[3978]: AVC avc: denied { perfmon } for pid=3978 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.715000 audit[3978]: AVC avc: denied { perfmon } for pid=3978 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.715000 audit[3978]: AVC avc: denied { perfmon } for pid=3978 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.715000 audit[3978]: AVC avc: denied { perfmon } for pid=3978 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.715000 audit[3978]: AVC avc: denied { perfmon } for pid=3978 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.715000 audit[3978]: AVC avc: denied { bpf } for pid=3978 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.715000 audit[3978]: AVC avc: denied { bpf } for pid=3978 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.715000 audit: BPF prog-id=20 op=LOAD Jul 12 00:37:07.715000 audit[3978]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd6315998 a2=40 a3=ffffd63159c8 items=0 ppid=3790 pid=3978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.715000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 12 00:37:07.715000 audit: BPF prog-id=20 op=UNLOAD Jul 12 00:37:07.829228 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 12 00:37:07.829502 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calife389d88381: link becomes ready Jul 12 00:37:07.826624 systemd-networkd[1098]: calife389d88381: Link UP Jul 12 00:37:07.832007 systemd-networkd[1098]: calife389d88381: Gained carrier Jul 12 00:37:07.851578 env[1324]: 2025-07-12 00:37:07.645 [INFO][3891] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7f46b5b9d6--92dnp-eth0 calico-kube-controllers-7f46b5b9d6- calico-system 8eb4ca6c-aaff-4fe8-9f0e-771c33b4200d 954 0 2025-07-12 00:36:47 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f46b5b9d6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7f46b5b9d6-92dnp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calife389d88381 [] [] }} ContainerID="515cd1ebaa213931794fb390e47af7448adb46df8bca52119c4de102ff5d30a7" Namespace="calico-system" Pod="calico-kube-controllers-7f46b5b9d6-92dnp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f46b5b9d6--92dnp-" Jul 12 00:37:07.851578 env[1324]: 2025-07-12 00:37:07.645 [INFO][3891] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="515cd1ebaa213931794fb390e47af7448adb46df8bca52119c4de102ff5d30a7" Namespace="calico-system" Pod="calico-kube-controllers-7f46b5b9d6-92dnp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f46b5b9d6--92dnp-eth0" Jul 12 00:37:07.851578 env[1324]: 2025-07-12 00:37:07.748 [INFO][3949] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="515cd1ebaa213931794fb390e47af7448adb46df8bca52119c4de102ff5d30a7" HandleID="k8s-pod-network.515cd1ebaa213931794fb390e47af7448adb46df8bca52119c4de102ff5d30a7" Workload="localhost-k8s-calico--kube--controllers--7f46b5b9d6--92dnp-eth0" Jul 12 00:37:07.851578 env[1324]: 2025-07-12 00:37:07.756 [INFO][3949] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="515cd1ebaa213931794fb390e47af7448adb46df8bca52119c4de102ff5d30a7" HandleID="k8s-pod-network.515cd1ebaa213931794fb390e47af7448adb46df8bca52119c4de102ff5d30a7" Workload="localhost-k8s-calico--kube--controllers--7f46b5b9d6--92dnp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3010), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7f46b5b9d6-92dnp", "timestamp":"2025-07-12 00:37:07.748763384 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:37:07.851578 env[1324]: 2025-07-12 00:37:07.756 [INFO][3949] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:37:07.851578 env[1324]: 2025-07-12 00:37:07.756 [INFO][3949] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:37:07.851578 env[1324]: 2025-07-12 00:37:07.756 [INFO][3949] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:37:07.851578 env[1324]: 2025-07-12 00:37:07.772 [INFO][3949] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.515cd1ebaa213931794fb390e47af7448adb46df8bca52119c4de102ff5d30a7" host="localhost" Jul 12 00:37:07.851578 env[1324]: 2025-07-12 00:37:07.785 [INFO][3949] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:37:07.851578 env[1324]: 2025-07-12 00:37:07.793 [INFO][3949] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:37:07.851578 env[1324]: 2025-07-12 00:37:07.799 [INFO][3949] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:37:07.851578 env[1324]: 2025-07-12 00:37:07.802 [INFO][3949] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:37:07.851578 env[1324]: 2025-07-12 00:37:07.802 [INFO][3949] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.515cd1ebaa213931794fb390e47af7448adb46df8bca52119c4de102ff5d30a7" host="localhost" Jul 12 00:37:07.851578 env[1324]: 2025-07-12 00:37:07.803 [INFO][3949] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.515cd1ebaa213931794fb390e47af7448adb46df8bca52119c4de102ff5d30a7 Jul 12 00:37:07.851578 env[1324]: 2025-07-12 00:37:07.808 [INFO][3949] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.515cd1ebaa213931794fb390e47af7448adb46df8bca52119c4de102ff5d30a7" host="localhost" Jul 12 00:37:07.851578 env[1324]: 2025-07-12 00:37:07.812 [INFO][3949] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.515cd1ebaa213931794fb390e47af7448adb46df8bca52119c4de102ff5d30a7" host="localhost" Jul 12 00:37:07.851578 env[1324]: 2025-07-12 00:37:07.812 [INFO][3949] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.515cd1ebaa213931794fb390e47af7448adb46df8bca52119c4de102ff5d30a7" host="localhost" Jul 12 00:37:07.851578 env[1324]: 2025-07-12 00:37:07.812 [INFO][3949] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:37:07.851578 env[1324]: 2025-07-12 00:37:07.812 [INFO][3949] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="515cd1ebaa213931794fb390e47af7448adb46df8bca52119c4de102ff5d30a7" HandleID="k8s-pod-network.515cd1ebaa213931794fb390e47af7448adb46df8bca52119c4de102ff5d30a7" Workload="localhost-k8s-calico--kube--controllers--7f46b5b9d6--92dnp-eth0" Jul 12 00:37:07.852182 env[1324]: 2025-07-12 00:37:07.823 [INFO][3891] cni-plugin/k8s.go 418: Populated endpoint ContainerID="515cd1ebaa213931794fb390e47af7448adb46df8bca52119c4de102ff5d30a7" Namespace="calico-system" Pod="calico-kube-controllers-7f46b5b9d6-92dnp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f46b5b9d6--92dnp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f46b5b9d6--92dnp-eth0", GenerateName:"calico-kube-controllers-7f46b5b9d6-", Namespace:"calico-system", SelfLink:"", UID:"8eb4ca6c-aaff-4fe8-9f0e-771c33b4200d", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 36, 47, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f46b5b9d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7f46b5b9d6-92dnp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calife389d88381", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:37:07.852182 env[1324]: 2025-07-12 00:37:07.823 [INFO][3891] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="515cd1ebaa213931794fb390e47af7448adb46df8bca52119c4de102ff5d30a7" Namespace="calico-system" Pod="calico-kube-controllers-7f46b5b9d6-92dnp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f46b5b9d6--92dnp-eth0" Jul 12 00:37:07.852182 env[1324]: 2025-07-12 00:37:07.823 [INFO][3891] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calife389d88381 ContainerID="515cd1ebaa213931794fb390e47af7448adb46df8bca52119c4de102ff5d30a7" Namespace="calico-system" Pod="calico-kube-controllers-7f46b5b9d6-92dnp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f46b5b9d6--92dnp-eth0" Jul 12 00:37:07.852182 env[1324]: 2025-07-12 00:37:07.840 [INFO][3891] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="515cd1ebaa213931794fb390e47af7448adb46df8bca52119c4de102ff5d30a7" Namespace="calico-system" Pod="calico-kube-controllers-7f46b5b9d6-92dnp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f46b5b9d6--92dnp-eth0" Jul 12 00:37:07.852182 env[1324]: 2025-07-12 00:37:07.840 [INFO][3891] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="515cd1ebaa213931794fb390e47af7448adb46df8bca52119c4de102ff5d30a7" Namespace="calico-system" Pod="calico-kube-controllers-7f46b5b9d6-92dnp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f46b5b9d6--92dnp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f46b5b9d6--92dnp-eth0", GenerateName:"calico-kube-controllers-7f46b5b9d6-", Namespace:"calico-system", SelfLink:"", UID:"8eb4ca6c-aaff-4fe8-9f0e-771c33b4200d", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 36, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f46b5b9d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"515cd1ebaa213931794fb390e47af7448adb46df8bca52119c4de102ff5d30a7", Pod:"calico-kube-controllers-7f46b5b9d6-92dnp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calife389d88381", MAC:"2a:bf:93:69:e6:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:37:07.852182 env[1324]: 2025-07-12 00:37:07.849 [INFO][3891] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="515cd1ebaa213931794fb390e47af7448adb46df8bca52119c4de102ff5d30a7" Namespace="calico-system" Pod="calico-kube-controllers-7f46b5b9d6-92dnp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f46b5b9d6--92dnp-eth0" Jul 12 00:37:07.872946 env[1324]: time="2025-07-12T00:37:07.872880379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:37:07.873096 env[1324]: time="2025-07-12T00:37:07.872923782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:37:07.873096 env[1324]: time="2025-07-12T00:37:07.872948505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:37:07.873688 env[1324]: time="2025-07-12T00:37:07.873348059Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/515cd1ebaa213931794fb390e47af7448adb46df8bca52119c4de102ff5d30a7 pid=4031 runtime=io.containerd.runc.v2 Jul 12 00:37:07.918280 systemd-networkd[1098]: vxlan.calico: Link UP Jul 12 00:37:07.918286 systemd-networkd[1098]: vxlan.calico: Gained carrier Jul 12 00:37:07.957827 systemd-resolved[1238]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:37:07.964304 systemd-networkd[1098]: calia47a7df3e05: Link UP Jul 12 00:37:07.966062 systemd-networkd[1098]: calia47a7df3e05: Gained carrier Jul 12 00:37:07.966405 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia47a7df3e05: link becomes ready Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: AVC 
avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit: BPF prog-id=21 op=LOAD Jul 12 00:37:07.970000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffed830f98 a2=98 a3=ffffed830f88 items=0 ppid=3790 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.970000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:37:07.970000 audit: BPF prog-id=21 op=UNLOAD Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied 
{ perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit: BPF prog-id=22 op=LOAD Jul 12 00:37:07.970000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffed830c78 a2=74 a3=95 items=0 ppid=3790 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.970000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:37:07.970000 audit: BPF prog-id=22 op=UNLOAD Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 
00:37:07.970000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit: BPF prog-id=23 op=LOAD Jul 12 00:37:07.970000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffed830cd8 a2=94 a3=2 items=0 ppid=3790 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.970000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:37:07.970000 audit: BPF prog-id=23 op=UNLOAD Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffed830d08 a2=28 a3=ffffed830e38 items=0 ppid=3790 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.970000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 
success=no exit=-22 a0=12 a1=ffffed830d38 a2=28 a3=ffffed830e68 items=0 ppid=3790 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.970000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffed830be8 a2=28 a3=ffffed830d18 items=0 ppid=3790 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.970000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffed830d58 a2=28 a3=ffffed830e88 items=0 ppid=3790 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.970000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffed830d38 a2=28 a3=ffffed830e68 items=0 ppid=3790 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.970000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffed830d28 a2=28 a3=ffffed830e58 items=0 ppid=3790 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.970000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffed830d58 a2=28 a3=ffffed830e88 items=0 ppid=3790 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.970000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffed830d38 a2=28 a3=ffffed830e68 items=0 ppid=3790 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.970000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffed830d58 a2=28 a3=ffffed830e88 items=0 ppid=3790 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.970000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffed830d28 a2=28 a3=ffffed830e58 items=0 ppid=3790 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.970000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffed830da8 a2=28 a3=ffffed830ee8 items=0 ppid=3790 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.970000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { bpf } for 
pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.970000 audit: BPF prog-id=24 op=LOAD Jul 12 00:37:07.970000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=yes 
exit=6 a0=5 a1=ffffed830bc8 a2=40 a3=ffffed830bf8 items=0 ppid=3790 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.970000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:37:07.970000 audit: BPF prog-id=24 op=UNLOAD Jul 12 00:37:07.972000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.972000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=0 a1=ffffed830bf0 a2=50 a3=0 items=0 ppid=3790 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.972000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:37:07.972000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.972000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=0 a1=ffffed830bf0 a2=50 a3=0 items=0 ppid=3790 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.972000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:37:07.972000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.972000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.972000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.972000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.972000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.972000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.972000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.972000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.972000 audit[4098]: AVC avc: denied { bpf } for 
pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.972000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.972000 audit: BPF prog-id=25 op=LOAD Jul 12 00:37:07.972000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffed830358 a2=94 a3=2 items=0 ppid=3790 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.972000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:37:07.972000 audit: BPF prog-id=25 op=UNLOAD Jul 12 00:37:07.972000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.972000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.972000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.972000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.972000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.972000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.972000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.972000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.972000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.972000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.972000 audit: BPF prog-id=26 op=LOAD Jul 12 00:37:07.972000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffed8304e8 a2=94 a3=30 items=0 ppid=3790 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.972000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 12 00:37:07.985096 env[1324]: 2025-07-12 00:37:07.727 [INFO][3925] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7d9dcbc845--hgq6q-eth0 calico-apiserver-7d9dcbc845- calico-apiserver faaca0cb-1548-4256-82f4-00433e531079 955 0 2025-07-12 00:36:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d9dcbc845 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7d9dcbc845-hgq6q eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia47a7df3e05 [] [] }} ContainerID="d42fec67b342518352f397bb63d3e583ac00357e203617d6bdaef25410339430" Namespace="calico-apiserver" Pod="calico-apiserver-7d9dcbc845-hgq6q" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d9dcbc845--hgq6q-" Jul 12 00:37:07.985096 env[1324]: 2025-07-12 00:37:07.727 [INFO][3925] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d42fec67b342518352f397bb63d3e583ac00357e203617d6bdaef25410339430" Namespace="calico-apiserver" Pod="calico-apiserver-7d9dcbc845-hgq6q" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d9dcbc845--hgq6q-eth0" Jul 12 00:37:07.985096 env[1324]: 2025-07-12 00:37:07.804 [INFO][3992] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d42fec67b342518352f397bb63d3e583ac00357e203617d6bdaef25410339430" HandleID="k8s-pod-network.d42fec67b342518352f397bb63d3e583ac00357e203617d6bdaef25410339430" Workload="localhost-k8s-calico--apiserver--7d9dcbc845--hgq6q-eth0" Jul 12 00:37:07.985096 env[1324]: 2025-07-12 00:37:07.804 [INFO][3992] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d42fec67b342518352f397bb63d3e583ac00357e203617d6bdaef25410339430" HandleID="k8s-pod-network.d42fec67b342518352f397bb63d3e583ac00357e203617d6bdaef25410339430" Workload="localhost-k8s-calico--apiserver--7d9dcbc845--hgq6q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x40002c3600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7d9dcbc845-hgq6q", "timestamp":"2025-07-12 00:37:07.804755672 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:37:07.985096 env[1324]: 2025-07-12 00:37:07.805 [INFO][3992] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:37:07.985096 env[1324]: 2025-07-12 00:37:07.812 [INFO][3992] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:37:07.985096 env[1324]: 2025-07-12 00:37:07.813 [INFO][3992] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:37:07.985096 env[1324]: 2025-07-12 00:37:07.885 [INFO][3992] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d42fec67b342518352f397bb63d3e583ac00357e203617d6bdaef25410339430" host="localhost" Jul 12 00:37:07.985096 env[1324]: 2025-07-12 00:37:07.930 [INFO][3992] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:37:07.985096 env[1324]: 2025-07-12 00:37:07.936 [INFO][3992] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:37:07.985096 env[1324]: 2025-07-12 00:37:07.938 [INFO][3992] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:37:07.985096 env[1324]: 2025-07-12 00:37:07.941 [INFO][3992] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:37:07.985096 env[1324]: 2025-07-12 00:37:07.941 [INFO][3992] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d42fec67b342518352f397bb63d3e583ac00357e203617d6bdaef25410339430" host="localhost" Jul 12 00:37:07.985096 env[1324]: 
2025-07-12 00:37:07.944 [INFO][3992] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d42fec67b342518352f397bb63d3e583ac00357e203617d6bdaef25410339430 Jul 12 00:37:07.985096 env[1324]: 2025-07-12 00:37:07.947 [INFO][3992] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d42fec67b342518352f397bb63d3e583ac00357e203617d6bdaef25410339430" host="localhost" Jul 12 00:37:07.985096 env[1324]: 2025-07-12 00:37:07.957 [INFO][3992] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.d42fec67b342518352f397bb63d3e583ac00357e203617d6bdaef25410339430" host="localhost" Jul 12 00:37:07.985096 env[1324]: 2025-07-12 00:37:07.957 [INFO][3992] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.d42fec67b342518352f397bb63d3e583ac00357e203617d6bdaef25410339430" host="localhost" Jul 12 00:37:07.985096 env[1324]: 2025-07-12 00:37:07.957 [INFO][3992] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:37:07.985096 env[1324]: 2025-07-12 00:37:07.957 [INFO][3992] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="d42fec67b342518352f397bb63d3e583ac00357e203617d6bdaef25410339430" HandleID="k8s-pod-network.d42fec67b342518352f397bb63d3e583ac00357e203617d6bdaef25410339430" Workload="localhost-k8s-calico--apiserver--7d9dcbc845--hgq6q-eth0" Jul 12 00:37:07.985896 env[1324]: 2025-07-12 00:37:07.961 [INFO][3925] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d42fec67b342518352f397bb63d3e583ac00357e203617d6bdaef25410339430" Namespace="calico-apiserver" Pod="calico-apiserver-7d9dcbc845-hgq6q" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d9dcbc845--hgq6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d9dcbc845--hgq6q-eth0", GenerateName:"calico-apiserver-7d9dcbc845-", Namespace:"calico-apiserver", SelfLink:"", UID:"faaca0cb-1548-4256-82f4-00433e531079", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 36, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d9dcbc845", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7d9dcbc845-hgq6q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia47a7df3e05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:37:07.985896 env[1324]: 2025-07-12 00:37:07.961 [INFO][3925] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="d42fec67b342518352f397bb63d3e583ac00357e203617d6bdaef25410339430" Namespace="calico-apiserver" Pod="calico-apiserver-7d9dcbc845-hgq6q" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d9dcbc845--hgq6q-eth0" Jul 12 00:37:07.985896 env[1324]: 2025-07-12 00:37:07.961 [INFO][3925] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia47a7df3e05 ContainerID="d42fec67b342518352f397bb63d3e583ac00357e203617d6bdaef25410339430" Namespace="calico-apiserver" Pod="calico-apiserver-7d9dcbc845-hgq6q" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d9dcbc845--hgq6q-eth0" Jul 12 00:37:07.985896 env[1324]: 2025-07-12 00:37:07.967 [INFO][3925] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d42fec67b342518352f397bb63d3e583ac00357e203617d6bdaef25410339430" Namespace="calico-apiserver" Pod="calico-apiserver-7d9dcbc845-hgq6q" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d9dcbc845--hgq6q-eth0" Jul 12 00:37:07.985896 env[1324]: 2025-07-12 00:37:07.967 [INFO][3925] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d42fec67b342518352f397bb63d3e583ac00357e203617d6bdaef25410339430" Namespace="calico-apiserver" Pod="calico-apiserver-7d9dcbc845-hgq6q" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d9dcbc845--hgq6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d9dcbc845--hgq6q-eth0", GenerateName:"calico-apiserver-7d9dcbc845-", Namespace:"calico-apiserver", 
SelfLink:"", UID:"faaca0cb-1548-4256-82f4-00433e531079", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 36, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d9dcbc845", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d42fec67b342518352f397bb63d3e583ac00357e203617d6bdaef25410339430", Pod:"calico-apiserver-7d9dcbc845-hgq6q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia47a7df3e05", MAC:"aa:ab:3b:ca:99:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:37:07.985896 env[1324]: 2025-07-12 00:37:07.978 [INFO][3925] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d42fec67b342518352f397bb63d3e583ac00357e203617d6bdaef25410339430" Namespace="calico-apiserver" Pod="calico-apiserver-7d9dcbc845-hgq6q" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d9dcbc845--hgq6q-eth0" Jul 12 00:37:07.986000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.986000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.986000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.986000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.986000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.986000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.986000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.986000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.986000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.986000 audit: BPF prog-id=27 op=LOAD Jul 12 00:37:07.986000 audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffb9a9af8 a2=98 a3=fffffb9a9ae8 items=0 ppid=3790 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Jul 12 00:37:07.986000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:37:07.986000 audit: BPF prog-id=27 op=UNLOAD Jul 12 00:37:07.986000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.986000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.986000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.986000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.986000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.986000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.986000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.986000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Jul 12 00:37:07.986000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.986000 audit: BPF prog-id=28 op=LOAD Jul 12 00:37:07.986000 audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffb9a9788 a2=74 a3=95 items=0 ppid=3790 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.986000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:37:07.986000 audit: BPF prog-id=28 op=UNLOAD Jul 12 00:37:07.986000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.986000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.986000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.986000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.986000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.986000 
audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.986000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.986000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.986000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:07.986000 audit: BPF prog-id=29 op=LOAD Jul 12 00:37:07.986000 audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffb9a97e8 a2=94 a3=2 items=0 ppid=3790 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:07.986000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:37:07.986000 audit: BPF prog-id=29 op=UNLOAD Jul 12 00:37:08.006007 env[1324]: time="2025-07-12T00:37:08.005874418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f46b5b9d6-92dnp,Uid:8eb4ca6c-aaff-4fe8-9f0e-771c33b4200d,Namespace:calico-system,Attempt:1,} returns sandbox id \"515cd1ebaa213931794fb390e47af7448adb46df8bca52119c4de102ff5d30a7\"" Jul 12 00:37:08.008117 env[1324]: time="2025-07-12T00:37:08.008088207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 12 
00:37:08.018317 env[1324]: time="2025-07-12T00:37:08.018249791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:37:08.018476 env[1324]: time="2025-07-12T00:37:08.018296395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:37:08.018476 env[1324]: time="2025-07-12T00:37:08.018306956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:37:08.018756 env[1324]: time="2025-07-12T00:37:08.018717551Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d42fec67b342518352f397bb63d3e583ac00357e203617d6bdaef25410339430 pid=4140 runtime=io.containerd.runc.v2 Jul 12 00:37:08.063061 systemd-networkd[1098]: cali2ea13a1589a: Link UP Jul 12 00:37:08.071189 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2ea13a1589a: link becomes ready Jul 12 00:37:08.070660 systemd-networkd[1098]: cali2ea13a1589a: Gained carrier Jul 12 00:37:08.074549 systemd-resolved[1238]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:37:08.083552 env[1324]: 2025-07-12 00:37:07.651 [INFO][3909] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--nkzk8-eth0 coredns-7c65d6cfc9- kube-system 3888f770-0a64-4382-86fc-ba4105786dc9 956 0 2025-07-12 00:36:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-nkzk8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2ea13a1589a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="1c6beb9006060003e8970e7571fb5d5eb34a7b5ede836e2d65b13dea1e35ea3c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nkzk8" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nkzk8-" Jul 12 00:37:08.083552 env[1324]: 2025-07-12 00:37:07.651 [INFO][3909] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1c6beb9006060003e8970e7571fb5d5eb34a7b5ede836e2d65b13dea1e35ea3c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nkzk8" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nkzk8-eth0" Jul 12 00:37:08.083552 env[1324]: 2025-07-12 00:37:07.808 [INFO][3962] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1c6beb9006060003e8970e7571fb5d5eb34a7b5ede836e2d65b13dea1e35ea3c" HandleID="k8s-pod-network.1c6beb9006060003e8970e7571fb5d5eb34a7b5ede836e2d65b13dea1e35ea3c" Workload="localhost-k8s-coredns--7c65d6cfc9--nkzk8-eth0" Jul 12 00:37:08.083552 env[1324]: 2025-07-12 00:37:07.808 [INFO][3962] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1c6beb9006060003e8970e7571fb5d5eb34a7b5ede836e2d65b13dea1e35ea3c" HandleID="k8s-pod-network.1c6beb9006060003e8970e7571fb5d5eb34a7b5ede836e2d65b13dea1e35ea3c" Workload="localhost-k8s-coredns--7c65d6cfc9--nkzk8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c240), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-nkzk8", "timestamp":"2025-07-12 00:37:07.808702896 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:37:08.083552 env[1324]: 2025-07-12 00:37:07.808 [INFO][3962] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:37:08.083552 env[1324]: 2025-07-12 00:37:07.957 [INFO][3962] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:37:08.083552 env[1324]: 2025-07-12 00:37:07.958 [INFO][3962] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:37:08.083552 env[1324]: 2025-07-12 00:37:07.981 [INFO][3962] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1c6beb9006060003e8970e7571fb5d5eb34a7b5ede836e2d65b13dea1e35ea3c" host="localhost" Jul 12 00:37:08.083552 env[1324]: 2025-07-12 00:37:07.987 [INFO][3962] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:37:08.083552 env[1324]: 2025-07-12 00:37:08.038 [INFO][3962] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:37:08.083552 env[1324]: 2025-07-12 00:37:08.039 [INFO][3962] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:37:08.083552 env[1324]: 2025-07-12 00:37:08.042 [INFO][3962] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:37:08.083552 env[1324]: 2025-07-12 00:37:08.042 [INFO][3962] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1c6beb9006060003e8970e7571fb5d5eb34a7b5ede836e2d65b13dea1e35ea3c" host="localhost" Jul 12 00:37:08.083552 env[1324]: 2025-07-12 00:37:08.044 [INFO][3962] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1c6beb9006060003e8970e7571fb5d5eb34a7b5ede836e2d65b13dea1e35ea3c Jul 12 00:37:08.083552 env[1324]: 2025-07-12 00:37:08.051 [INFO][3962] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1c6beb9006060003e8970e7571fb5d5eb34a7b5ede836e2d65b13dea1e35ea3c" host="localhost" Jul 12 00:37:08.083552 env[1324]: 2025-07-12 00:37:08.057 [INFO][3962] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.1c6beb9006060003e8970e7571fb5d5eb34a7b5ede836e2d65b13dea1e35ea3c" host="localhost" Jul 12 
00:37:08.083552 env[1324]: 2025-07-12 00:37:08.057 [INFO][3962] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.1c6beb9006060003e8970e7571fb5d5eb34a7b5ede836e2d65b13dea1e35ea3c" host="localhost" Jul 12 00:37:08.083552 env[1324]: 2025-07-12 00:37:08.057 [INFO][3962] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:37:08.083552 env[1324]: 2025-07-12 00:37:08.057 [INFO][3962] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="1c6beb9006060003e8970e7571fb5d5eb34a7b5ede836e2d65b13dea1e35ea3c" HandleID="k8s-pod-network.1c6beb9006060003e8970e7571fb5d5eb34a7b5ede836e2d65b13dea1e35ea3c" Workload="localhost-k8s-coredns--7c65d6cfc9--nkzk8-eth0" Jul 12 00:37:08.084204 env[1324]: 2025-07-12 00:37:08.059 [INFO][3909] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1c6beb9006060003e8970e7571fb5d5eb34a7b5ede836e2d65b13dea1e35ea3c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nkzk8" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nkzk8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--nkzk8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3888f770-0a64-4382-86fc-ba4105786dc9", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 36, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-7c65d6cfc9-nkzk8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ea13a1589a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:37:08.084204 env[1324]: 2025-07-12 00:37:08.059 [INFO][3909] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="1c6beb9006060003e8970e7571fb5d5eb34a7b5ede836e2d65b13dea1e35ea3c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nkzk8" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nkzk8-eth0" Jul 12 00:37:08.084204 env[1324]: 2025-07-12 00:37:08.059 [INFO][3909] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ea13a1589a ContainerID="1c6beb9006060003e8970e7571fb5d5eb34a7b5ede836e2d65b13dea1e35ea3c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nkzk8" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nkzk8-eth0" Jul 12 00:37:08.084204 env[1324]: 2025-07-12 00:37:08.063 [INFO][3909] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1c6beb9006060003e8970e7571fb5d5eb34a7b5ede836e2d65b13dea1e35ea3c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nkzk8" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nkzk8-eth0" Jul 12 00:37:08.084204 env[1324]: 2025-07-12 00:37:08.065 [INFO][3909] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="1c6beb9006060003e8970e7571fb5d5eb34a7b5ede836e2d65b13dea1e35ea3c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nkzk8" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nkzk8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--nkzk8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3888f770-0a64-4382-86fc-ba4105786dc9", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 36, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1c6beb9006060003e8970e7571fb5d5eb34a7b5ede836e2d65b13dea1e35ea3c", Pod:"coredns-7c65d6cfc9-nkzk8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ea13a1589a", MAC:"2a:6d:8c:90:05:b2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:37:08.084204 env[1324]: 2025-07-12 00:37:08.076 [INFO][3909] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1c6beb9006060003e8970e7571fb5d5eb34a7b5ede836e2d65b13dea1e35ea3c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nkzk8" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nkzk8-eth0" Jul 12 00:37:08.100648 env[1324]: time="2025-07-12T00:37:08.099765006Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:37:08.100648 env[1324]: time="2025-07-12T00:37:08.099827011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:37:08.100648 env[1324]: time="2025-07-12T00:37:08.099837612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:37:08.100648 env[1324]: time="2025-07-12T00:37:08.099990625Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1c6beb9006060003e8970e7571fb5d5eb34a7b5ede836e2d65b13dea1e35ea3c pid=4187 runtime=io.containerd.runc.v2 Jul 12 00:37:08.108000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.108000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.108000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.108000 audit[4105]: AVC avc: denied { 
perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.108000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.108000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.108000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.108000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.108000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.108000 audit: BPF prog-id=30 op=LOAD Jul 12 00:37:08.108000 audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffb9a97a8 a2=40 a3=fffffb9a97d8 items=0 ppid=3790 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:08.108000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:37:08.108000 audit: BPF prog-id=30 op=UNLOAD Jul 12 00:37:08.108000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.108000 audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=fffffb9a98c0 a2=50 a3=0 items=0 ppid=3790 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:08.108000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:37:08.114646 env[1324]: time="2025-07-12T00:37:08.114589267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d9dcbc845-hgq6q,Uid:faaca0cb-1548-4256-82f4-00433e531079,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d42fec67b342518352f397bb63d3e583ac00357e203617d6bdaef25410339430\"" Jul 12 00:37:08.117000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.117000 audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffffb9a9818 a2=28 a3=fffffb9a9948 items=0 ppid=3790 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:08.117000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:37:08.117000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Jul 12 00:37:08.117000 audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffb9a9848 a2=28 a3=fffffb9a9978 items=0 ppid=3790 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:08.117000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:37:08.117000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.117000 audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffb9a96f8 a2=28 a3=fffffb9a9828 items=0 ppid=3790 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:08.117000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:37:08.117000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.117000 audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffffb9a9868 a2=28 a3=fffffb9a9998 items=0 ppid=3790 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:08.117000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:37:08.117000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.117000 audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffffb9a9848 a2=28 a3=fffffb9a9978 items=0 ppid=3790 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:08.117000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:37:08.117000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.117000 audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffffb9a9838 a2=28 a3=fffffb9a9968 items=0 ppid=3790 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:08.117000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:37:08.117000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.117000 
audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffffb9a9868 a2=28 a3=fffffb9a9998 items=0 ppid=3790 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:08.117000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:37:08.117000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.117000 audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffb9a9848 a2=28 a3=fffffb9a9978 items=0 ppid=3790 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:08.117000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:37:08.117000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.117000 audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffb9a9868 a2=28 a3=fffffb9a9998 items=0 ppid=3790 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:08.117000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:37:08.117000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.117000 audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffffb9a9838 a2=28 a3=fffffb9a9968 items=0 ppid=3790 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:08.117000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:37:08.117000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.117000 audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffffb9a98b8 a2=28 a3=fffffb9a99f8 items=0 ppid=3790 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:08.117000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:37:08.117000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.117000 
audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=fffffb9a95f0 a2=50 a3=0 items=0 ppid=3790 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:08.117000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:37:08.117000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.117000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.117000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.117000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.117000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.117000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.117000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.117000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.117000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.117000 audit: BPF prog-id=31 op=LOAD Jul 12 00:37:08.117000 audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffb9a95f8 a2=94 a3=5 items=0 ppid=3790 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:08.117000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:37:08.118000 audit: BPF prog-id=31 op=UNLOAD Jul 12 00:37:08.118000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.118000 audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=fffffb9a9700 a2=50 a3=0 items=0 ppid=3790 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:08.118000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:37:08.118000 audit[4105]: AVC avc: denied { bpf } 
for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.118000 audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=fffffb9a9848 a2=4 a3=3 items=0 ppid=3790 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:08.118000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:37:08.118000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.118000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.118000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.118000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.118000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.118000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 
12 00:37:08.118000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.118000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.118000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.118000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.118000 audit[4105]: AVC avc: denied { confidentiality } for pid=4105 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 12 00:37:08.118000 audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffffb9a9828 a2=94 a3=6 items=0 ppid=3790 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:08.118000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:37:08.118000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.118000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.118000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.118000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.118000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.118000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.118000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.118000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.118000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.118000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.118000 audit[4105]: AVC avc: denied { confidentiality } for pid=4105 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 12 00:37:08.118000 audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffffb9a8ff8 a2=94 a3=83 items=0 ppid=3790 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:08.118000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:37:08.118000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.118000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.118000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.118000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.118000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.118000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.118000 audit[4105]: AVC avc: 
denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.118000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.118000 audit[4105]: AVC avc: denied { perfmon } for pid=4105 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.118000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.118000 audit[4105]: AVC avc: denied { confidentiality } for pid=4105 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 12 00:37:08.118000 audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffffb9a8ff8 a2=94 a3=83 items=0 ppid=3790 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:08.118000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:37:08.119000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.119000 audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffffb9aaa38 a2=10 a3=fffffb9aab28 items=0 ppid=3790 
pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:08.119000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:37:08.122000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.122000 audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffffb9aa8f8 a2=10 a3=fffffb9aa9e8 items=0 ppid=3790 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:08.122000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:37:08.122000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.122000 audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffffb9aa868 a2=10 a3=fffffb9aa9e8 items=0 ppid=3790 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:08.122000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 
00:37:08.122000 audit[4105]: AVC avc: denied { bpf } for pid=4105 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 12 00:37:08.122000 audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffffb9aa868 a2=10 a3=fffffb9aa9e8 items=0 ppid=3790 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:08.122000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 12 00:37:08.133000 audit: BPF prog-id=26 op=UNLOAD Jul 12 00:37:08.144273 systemd-resolved[1238]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:37:08.163394 env[1324]: time="2025-07-12T00:37:08.163339174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nkzk8,Uid:3888f770-0a64-4382-86fc-ba4105786dc9,Namespace:kube-system,Attempt:1,} returns sandbox id \"1c6beb9006060003e8970e7571fb5d5eb34a7b5ede836e2d65b13dea1e35ea3c\"" Jul 12 00:37:08.164702 kubelet[2103]: E0712 00:37:08.164159 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:37:08.166749 env[1324]: time="2025-07-12T00:37:08.166714821Z" level=info msg="CreateContainer within sandbox \"1c6beb9006060003e8970e7571fb5d5eb34a7b5ede836e2d65b13dea1e35ea3c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:37:08.181522 env[1324]: time="2025-07-12T00:37:08.181461356Z" level=info msg="CreateContainer within sandbox \"1c6beb9006060003e8970e7571fb5d5eb34a7b5ede836e2d65b13dea1e35ea3c\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6eff5623a11bfc7a8434a34fcb41022b20f66be99006b6bd932633e03366fef2\"" Jul 12 00:37:08.182179 env[1324]: time="2025-07-12T00:37:08.182145134Z" level=info msg="StartContainer for \"6eff5623a11bfc7a8434a34fcb41022b20f66be99006b6bd932633e03366fef2\"" Jul 12 00:37:08.212000 audit[4275]: NETFILTER_CFG table=mangle:103 family=2 entries=16 op=nft_register_chain pid=4275 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 12 00:37:08.212000 audit[4275]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffed3eb260 a2=0 a3=ffff8f048fa8 items=0 ppid=3790 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:08.212000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 12 00:37:08.223000 audit[4277]: NETFILTER_CFG table=raw:104 family=2 entries=21 op=nft_register_chain pid=4277 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 12 00:37:08.223000 audit[4277]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffdd449ae0 a2=0 a3=ffff8367afa8 items=0 ppid=3790 pid=4277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:08.223000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 12 00:37:08.231000 audit[4276]: NETFILTER_CFG table=nat:105 family=2 entries=15 op=nft_register_chain pid=4276 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 12 00:37:08.231000 audit[4276]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=fffff2b6d2c0 a2=0 a3=ffffba952fa8 items=0 ppid=3790 pid=4276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:08.231000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 12 00:37:08.240610 env[1324]: time="2025-07-12T00:37:08.240562103Z" level=info msg="StartContainer for \"6eff5623a11bfc7a8434a34fcb41022b20f66be99006b6bd932633e03366fef2\" returns successfully" Jul 12 00:37:08.237000 audit[4280]: NETFILTER_CFG table=filter:106 family=2 entries=94 op=nft_register_chain pid=4280 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 12 00:37:08.237000 audit[4280]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=fffff3273d80 a2=0 a3=ffffa37bffa8 items=0 ppid=3790 pid=4280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:08.237000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 12 00:37:08.284000 audit[4300]: NETFILTER_CFG table=filter:107 family=2 entries=112 op=nft_register_chain pid=4300 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 12 00:37:08.284000 audit[4300]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=64536 a0=3 a1=ffffd4ce6f00 a2=0 a3=ffffb2776fa8 items=0 ppid=3790 pid=4300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 
00:37:08.284000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 12 00:37:08.405462 env[1324]: time="2025-07-12T00:37:08.405234672Z" level=info msg="StopPodSandbox for \"f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7\"" Jul 12 00:37:08.491651 env[1324]: 2025-07-12 00:37:08.453 [INFO][4319] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" Jul 12 00:37:08.491651 env[1324]: 2025-07-12 00:37:08.453 [INFO][4319] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" iface="eth0" netns="/var/run/netns/cni-88b21826-df9c-d8e5-783d-dc92f4d9b59d" Jul 12 00:37:08.491651 env[1324]: 2025-07-12 00:37:08.454 [INFO][4319] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" iface="eth0" netns="/var/run/netns/cni-88b21826-df9c-d8e5-783d-dc92f4d9b59d" Jul 12 00:37:08.491651 env[1324]: 2025-07-12 00:37:08.454 [INFO][4319] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" iface="eth0" netns="/var/run/netns/cni-88b21826-df9c-d8e5-783d-dc92f4d9b59d" Jul 12 00:37:08.491651 env[1324]: 2025-07-12 00:37:08.454 [INFO][4319] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" Jul 12 00:37:08.491651 env[1324]: 2025-07-12 00:37:08.454 [INFO][4319] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" Jul 12 00:37:08.491651 env[1324]: 2025-07-12 00:37:08.477 [INFO][4327] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" HandleID="k8s-pod-network.f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" Workload="localhost-k8s-coredns--7c65d6cfc9--dfvjl-eth0" Jul 12 00:37:08.491651 env[1324]: 2025-07-12 00:37:08.478 [INFO][4327] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:37:08.491651 env[1324]: 2025-07-12 00:37:08.478 [INFO][4327] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:37:08.491651 env[1324]: 2025-07-12 00:37:08.486 [WARNING][4327] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" HandleID="k8s-pod-network.f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" Workload="localhost-k8s-coredns--7c65d6cfc9--dfvjl-eth0" Jul 12 00:37:08.491651 env[1324]: 2025-07-12 00:37:08.486 [INFO][4327] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" HandleID="k8s-pod-network.f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" Workload="localhost-k8s-coredns--7c65d6cfc9--dfvjl-eth0" Jul 12 00:37:08.491651 env[1324]: 2025-07-12 00:37:08.488 [INFO][4327] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:37:08.491651 env[1324]: 2025-07-12 00:37:08.489 [INFO][4319] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" Jul 12 00:37:08.492341 env[1324]: time="2025-07-12T00:37:08.491796236Z" level=info msg="TearDown network for sandbox \"f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7\" successfully" Jul 12 00:37:08.492341 env[1324]: time="2025-07-12T00:37:08.491828278Z" level=info msg="StopPodSandbox for \"f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7\" returns successfully" Jul 12 00:37:08.492462 kubelet[2103]: E0712 00:37:08.492141 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:37:08.494809 env[1324]: time="2025-07-12T00:37:08.494766488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dfvjl,Uid:3fa0a624-ecf9-48dd-83d5-27860d361813,Namespace:kube-system,Attempt:1,}" Jul 12 00:37:08.540992 systemd[1]: run-netns-cni\x2d9860a2b6\x2dddfe\x2de644\x2da928\x2d51a430134d5b.mount: Deactivated successfully. 
Jul 12 00:37:08.541120 systemd[1]: run-netns-cni\x2d88b21826\x2ddf9c\x2dd8e5\x2d783d\x2ddc92f4d9b59d.mount: Deactivated successfully. Jul 12 00:37:08.547953 kubelet[2103]: E0712 00:37:08.547826 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:37:08.569354 kubelet[2103]: I0712 00:37:08.569262 2103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-nkzk8" podStartSLOduration=35.569234063 podStartE2EDuration="35.569234063s" podCreationTimestamp="2025-07-12 00:36:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:37:08.55873717 +0000 UTC m=+43.233944068" watchObservedRunningTime="2025-07-12 00:37:08.569234063 +0000 UTC m=+43.244440961" Jul 12 00:37:08.586000 audit[4356]: NETFILTER_CFG table=filter:108 family=2 entries=18 op=nft_register_rule pid=4356 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:37:08.586000 audit[4356]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffffaae6710 a2=0 a3=1 items=0 ppid=2213 pid=4356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:08.586000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:37:08.593000 audit[4356]: NETFILTER_CFG table=nat:109 family=2 entries=16 op=nft_register_rule pid=4356 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:37:08.593000 audit[4356]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4236 a0=3 a1=fffffaae6710 a2=0 a3=1 items=0 ppid=2213 pid=4356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:08.593000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:37:08.602000 audit[4359]: NETFILTER_CFG table=filter:110 family=2 entries=15 op=nft_register_rule pid=4359 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:37:08.602000 audit[4359]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffcfc83780 a2=0 a3=1 items=0 ppid=2213 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:08.602000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:37:08.608000 audit[4359]: NETFILTER_CFG table=nat:111 family=2 entries=37 op=nft_register_chain pid=4359 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:37:08.608000 audit[4359]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14964 a0=3 a1=ffffcfc83780 a2=0 a3=1 items=0 ppid=2213 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:08.608000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:37:08.632641 systemd-networkd[1098]: calic9f3c3023a5: Link UP Jul 12 00:37:08.633871 systemd-networkd[1098]: calic9f3c3023a5: Gained carrier Jul 12 00:37:08.634444 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic9f3c3023a5: link becomes ready Jul 12 00:37:08.651207 env[1324]: 2025-07-12 00:37:08.545 [INFO][4334] cni-plugin/plugin.go 340: 
Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--dfvjl-eth0 coredns-7c65d6cfc9- kube-system 3fa0a624-ecf9-48dd-83d5-27860d361813 984 0 2025-07-12 00:36:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-dfvjl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic9f3c3023a5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="faaf728a8da2fb1c8bb9684deace59b3182595d8561b55197da6abcd92388370" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dfvjl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dfvjl-" Jul 12 00:37:08.651207 env[1324]: 2025-07-12 00:37:08.545 [INFO][4334] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="faaf728a8da2fb1c8bb9684deace59b3182595d8561b55197da6abcd92388370" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dfvjl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dfvjl-eth0" Jul 12 00:37:08.651207 env[1324]: 2025-07-12 00:37:08.588 [INFO][4349] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="faaf728a8da2fb1c8bb9684deace59b3182595d8561b55197da6abcd92388370" HandleID="k8s-pod-network.faaf728a8da2fb1c8bb9684deace59b3182595d8561b55197da6abcd92388370" Workload="localhost-k8s-coredns--7c65d6cfc9--dfvjl-eth0" Jul 12 00:37:08.651207 env[1324]: 2025-07-12 00:37:08.588 [INFO][4349] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="faaf728a8da2fb1c8bb9684deace59b3182595d8561b55197da6abcd92388370" HandleID="k8s-pod-network.faaf728a8da2fb1c8bb9684deace59b3182595d8561b55197da6abcd92388370" Workload="localhost-k8s-coredns--7c65d6cfc9--dfvjl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400012f720), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", 
"pod":"coredns-7c65d6cfc9-dfvjl", "timestamp":"2025-07-12 00:37:08.588288444 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:37:08.651207 env[1324]: 2025-07-12 00:37:08.588 [INFO][4349] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:37:08.651207 env[1324]: 2025-07-12 00:37:08.588 [INFO][4349] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:37:08.651207 env[1324]: 2025-07-12 00:37:08.588 [INFO][4349] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:37:08.651207 env[1324]: 2025-07-12 00:37:08.597 [INFO][4349] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.faaf728a8da2fb1c8bb9684deace59b3182595d8561b55197da6abcd92388370" host="localhost" Jul 12 00:37:08.651207 env[1324]: 2025-07-12 00:37:08.602 [INFO][4349] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:37:08.651207 env[1324]: 2025-07-12 00:37:08.612 [INFO][4349] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:37:08.651207 env[1324]: 2025-07-12 00:37:08.614 [INFO][4349] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:37:08.651207 env[1324]: 2025-07-12 00:37:08.616 [INFO][4349] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:37:08.651207 env[1324]: 2025-07-12 00:37:08.617 [INFO][4349] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.faaf728a8da2fb1c8bb9684deace59b3182595d8561b55197da6abcd92388370" host="localhost" Jul 12 00:37:08.651207 env[1324]: 2025-07-12 00:37:08.618 [INFO][4349] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.faaf728a8da2fb1c8bb9684deace59b3182595d8561b55197da6abcd92388370 Jul 12 00:37:08.651207 env[1324]: 2025-07-12 00:37:08.621 [INFO][4349] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.faaf728a8da2fb1c8bb9684deace59b3182595d8561b55197da6abcd92388370" host="localhost" Jul 12 00:37:08.651207 env[1324]: 2025-07-12 00:37:08.628 [INFO][4349] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.faaf728a8da2fb1c8bb9684deace59b3182595d8561b55197da6abcd92388370" host="localhost" Jul 12 00:37:08.651207 env[1324]: 2025-07-12 00:37:08.628 [INFO][4349] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.faaf728a8da2fb1c8bb9684deace59b3182595d8561b55197da6abcd92388370" host="localhost" Jul 12 00:37:08.651207 env[1324]: 2025-07-12 00:37:08.628 [INFO][4349] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:37:08.651207 env[1324]: 2025-07-12 00:37:08.628 [INFO][4349] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="faaf728a8da2fb1c8bb9684deace59b3182595d8561b55197da6abcd92388370" HandleID="k8s-pod-network.faaf728a8da2fb1c8bb9684deace59b3182595d8561b55197da6abcd92388370" Workload="localhost-k8s-coredns--7c65d6cfc9--dfvjl-eth0" Jul 12 00:37:08.651793 env[1324]: 2025-07-12 00:37:08.631 [INFO][4334] cni-plugin/k8s.go 418: Populated endpoint ContainerID="faaf728a8da2fb1c8bb9684deace59b3182595d8561b55197da6abcd92388370" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dfvjl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dfvjl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--dfvjl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3fa0a624-ecf9-48dd-83d5-27860d361813", 
ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 36, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-dfvjl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic9f3c3023a5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:37:08.651793 env[1324]: 2025-07-12 00:37:08.631 [INFO][4334] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="faaf728a8da2fb1c8bb9684deace59b3182595d8561b55197da6abcd92388370" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dfvjl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dfvjl-eth0" Jul 12 00:37:08.651793 env[1324]: 2025-07-12 00:37:08.631 [INFO][4334] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9f3c3023a5 
ContainerID="faaf728a8da2fb1c8bb9684deace59b3182595d8561b55197da6abcd92388370" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dfvjl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dfvjl-eth0" Jul 12 00:37:08.651793 env[1324]: 2025-07-12 00:37:08.634 [INFO][4334] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="faaf728a8da2fb1c8bb9684deace59b3182595d8561b55197da6abcd92388370" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dfvjl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dfvjl-eth0" Jul 12 00:37:08.651793 env[1324]: 2025-07-12 00:37:08.635 [INFO][4334] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="faaf728a8da2fb1c8bb9684deace59b3182595d8561b55197da6abcd92388370" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dfvjl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dfvjl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--dfvjl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3fa0a624-ecf9-48dd-83d5-27860d361813", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 36, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"faaf728a8da2fb1c8bb9684deace59b3182595d8561b55197da6abcd92388370", Pod:"coredns-7c65d6cfc9-dfvjl", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic9f3c3023a5", MAC:"f6:f5:c5:34:21:f8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:37:08.651793 env[1324]: 2025-07-12 00:37:08.648 [INFO][4334] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="faaf728a8da2fb1c8bb9684deace59b3182595d8561b55197da6abcd92388370" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dfvjl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dfvjl-eth0" Jul 12 00:37:08.659000 audit[4375]: NETFILTER_CFG table=filter:112 family=2 entries=44 op=nft_register_chain pid=4375 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 12 00:37:08.659000 audit[4375]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21532 a0=3 a1=ffffd5368590 a2=0 a3=ffff93ac6fa8 items=0 ppid=3790 pid=4375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:08.659000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 12 00:37:08.664960 env[1324]: time="2025-07-12T00:37:08.664890561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:37:08.665085 env[1324]: time="2025-07-12T00:37:08.664936205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:37:08.665085 env[1324]: time="2025-07-12T00:37:08.664946486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:37:08.665365 env[1324]: time="2025-07-12T00:37:08.665317877Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/faaf728a8da2fb1c8bb9684deace59b3182595d8561b55197da6abcd92388370 pid=4380 runtime=io.containerd.runc.v2 Jul 12 00:37:08.683654 systemd[1]: run-containerd-runc-k8s.io-faaf728a8da2fb1c8bb9684deace59b3182595d8561b55197da6abcd92388370-runc.P1fkwS.mount: Deactivated successfully. Jul 12 00:37:08.714469 systemd-resolved[1238]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:37:08.738044 env[1324]: time="2025-07-12T00:37:08.738006821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dfvjl,Uid:3fa0a624-ecf9-48dd-83d5-27860d361813,Namespace:kube-system,Attempt:1,} returns sandbox id \"faaf728a8da2fb1c8bb9684deace59b3182595d8561b55197da6abcd92388370\"" Jul 12 00:37:08.742190 kubelet[2103]: E0712 00:37:08.742155 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:37:08.744533 env[1324]: time="2025-07-12T00:37:08.744472491Z" level=info msg="CreateContainer within sandbox \"faaf728a8da2fb1c8bb9684deace59b3182595d8561b55197da6abcd92388370\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:37:08.753916 env[1324]: time="2025-07-12T00:37:08.753863570Z" level=info msg="CreateContainer within sandbox 
\"faaf728a8da2fb1c8bb9684deace59b3182595d8561b55197da6abcd92388370\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f64415126d259c2d6e30dfb21a3bb1a4c325d14ca47276720b7fbd79bfd18d45\"" Jul 12 00:37:08.754457 env[1324]: time="2025-07-12T00:37:08.754430218Z" level=info msg="StartContainer for \"f64415126d259c2d6e30dfb21a3bb1a4c325d14ca47276720b7fbd79bfd18d45\"" Jul 12 00:37:08.809729 env[1324]: time="2025-07-12T00:37:08.809622593Z" level=info msg="StartContainer for \"f64415126d259c2d6e30dfb21a3bb1a4c325d14ca47276720b7fbd79bfd18d45\" returns successfully" Jul 12 00:37:08.915531 systemd-networkd[1098]: calife389d88381: Gained IPv6LL Jul 12 00:37:09.170510 systemd-networkd[1098]: calia47a7df3e05: Gained IPv6LL Jul 12 00:37:09.410161 env[1324]: time="2025-07-12T00:37:09.409569340Z" level=info msg="StopPodSandbox for \"37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61\"" Jul 12 00:37:09.427645 systemd-networkd[1098]: cali2ea13a1589a: Gained IPv6LL Jul 12 00:37:09.544819 env[1324]: 2025-07-12 00:37:09.469 [INFO][4464] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" Jul 12 00:37:09.544819 env[1324]: 2025-07-12 00:37:09.469 [INFO][4464] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" iface="eth0" netns="/var/run/netns/cni-63197fcf-a864-69ea-bf3d-6b05d18cc774" Jul 12 00:37:09.544819 env[1324]: 2025-07-12 00:37:09.469 [INFO][4464] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" iface="eth0" netns="/var/run/netns/cni-63197fcf-a864-69ea-bf3d-6b05d18cc774" Jul 12 00:37:09.544819 env[1324]: 2025-07-12 00:37:09.469 [INFO][4464] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" iface="eth0" netns="/var/run/netns/cni-63197fcf-a864-69ea-bf3d-6b05d18cc774" Jul 12 00:37:09.544819 env[1324]: 2025-07-12 00:37:09.469 [INFO][4464] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" Jul 12 00:37:09.544819 env[1324]: 2025-07-12 00:37:09.469 [INFO][4464] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" Jul 12 00:37:09.544819 env[1324]: 2025-07-12 00:37:09.529 [INFO][4472] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" HandleID="k8s-pod-network.37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" Workload="localhost-k8s-calico--apiserver--7d9dcbc845--6nfvq-eth0" Jul 12 00:37:09.544819 env[1324]: 2025-07-12 00:37:09.529 [INFO][4472] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:37:09.544819 env[1324]: 2025-07-12 00:37:09.529 [INFO][4472] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:37:09.544819 env[1324]: 2025-07-12 00:37:09.540 [WARNING][4472] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" HandleID="k8s-pod-network.37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" Workload="localhost-k8s-calico--apiserver--7d9dcbc845--6nfvq-eth0" Jul 12 00:37:09.544819 env[1324]: 2025-07-12 00:37:09.540 [INFO][4472] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" HandleID="k8s-pod-network.37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" Workload="localhost-k8s-calico--apiserver--7d9dcbc845--6nfvq-eth0" Jul 12 00:37:09.544819 env[1324]: 2025-07-12 00:37:09.541 [INFO][4472] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:37:09.544819 env[1324]: 2025-07-12 00:37:09.543 [INFO][4464] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" Jul 12 00:37:09.547067 systemd[1]: run-netns-cni\x2d63197fcf\x2da864\x2d69ea\x2dbf3d\x2d6b05d18cc774.mount: Deactivated successfully. 
Jul 12 00:37:09.547663 env[1324]: time="2025-07-12T00:37:09.547625956Z" level=info msg="TearDown network for sandbox \"37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61\" successfully" Jul 12 00:37:09.547998 env[1324]: time="2025-07-12T00:37:09.547967785Z" level=info msg="StopPodSandbox for \"37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61\" returns successfully" Jul 12 00:37:09.548690 env[1324]: time="2025-07-12T00:37:09.548662122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d9dcbc845-6nfvq,Uid:fecfff6f-79d3-4090-96c1-83913da0527a,Namespace:calico-apiserver,Attempt:1,}" Jul 12 00:37:09.562217 systemd-networkd[1098]: vxlan.calico: Gained IPv6LL Jul 12 00:37:09.565902 kubelet[2103]: E0712 00:37:09.565872 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:37:09.566065 kubelet[2103]: E0712 00:37:09.565939 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:37:09.580075 kubelet[2103]: I0712 00:37:09.579953 2103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-dfvjl" podStartSLOduration=36.579937038 podStartE2EDuration="36.579937038s" podCreationTimestamp="2025-07-12 00:36:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:37:09.579555726 +0000 UTC m=+44.254762624" watchObservedRunningTime="2025-07-12 00:37:09.579937038 +0000 UTC m=+44.255143936" Jul 12 00:37:09.588000 audit[4493]: NETFILTER_CFG table=filter:113 family=2 entries=12 op=nft_register_rule pid=4493 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:37:09.588000 audit[4493]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=4504 a0=3 a1=ffffe2f54850 a2=0 a3=1 items=0 ppid=2213 pid=4493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:09.588000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:37:09.595000 audit[4493]: NETFILTER_CFG table=nat:114 family=2 entries=46 op=nft_register_rule pid=4493 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:37:09.595000 audit[4493]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14964 a0=3 a1=ffffe2f54850 a2=0 a3=1 items=0 ppid=2213 pid=4493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:09.595000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:37:09.719091 systemd-networkd[1098]: cali7d6c70dc994: Link UP Jul 12 00:37:09.721650 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 12 00:37:09.721743 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7d6c70dc994: link becomes ready Jul 12 00:37:09.721759 systemd-networkd[1098]: cali7d6c70dc994: Gained carrier Jul 12 00:37:09.779153 env[1324]: 2025-07-12 00:37:09.601 [INFO][4479] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7d9dcbc845--6nfvq-eth0 calico-apiserver-7d9dcbc845- calico-apiserver fecfff6f-79d3-4090-96c1-83913da0527a 1007 0 2025-07-12 00:36:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d9dcbc845 projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7d9dcbc845-6nfvq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7d6c70dc994 [] [] }} ContainerID="dc2bb6f1488e180aeb18054610bb6436630079b8cf6e25740c7d44b9fcd201d7" Namespace="calico-apiserver" Pod="calico-apiserver-7d9dcbc845-6nfvq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d9dcbc845--6nfvq-" Jul 12 00:37:09.779153 env[1324]: 2025-07-12 00:37:09.601 [INFO][4479] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dc2bb6f1488e180aeb18054610bb6436630079b8cf6e25740c7d44b9fcd201d7" Namespace="calico-apiserver" Pod="calico-apiserver-7d9dcbc845-6nfvq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d9dcbc845--6nfvq-eth0" Jul 12 00:37:09.779153 env[1324]: 2025-07-12 00:37:09.624 [INFO][4497] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dc2bb6f1488e180aeb18054610bb6436630079b8cf6e25740c7d44b9fcd201d7" HandleID="k8s-pod-network.dc2bb6f1488e180aeb18054610bb6436630079b8cf6e25740c7d44b9fcd201d7" Workload="localhost-k8s-calico--apiserver--7d9dcbc845--6nfvq-eth0" Jul 12 00:37:09.779153 env[1324]: 2025-07-12 00:37:09.624 [INFO][4497] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dc2bb6f1488e180aeb18054610bb6436630079b8cf6e25740c7d44b9fcd201d7" HandleID="k8s-pod-network.dc2bb6f1488e180aeb18054610bb6436630079b8cf6e25740c7d44b9fcd201d7" Workload="localhost-k8s-calico--apiserver--7d9dcbc845--6nfvq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3990), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7d9dcbc845-6nfvq", "timestamp":"2025-07-12 00:37:09.624284398 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:37:09.779153 env[1324]: 2025-07-12 00:37:09.624 [INFO][4497] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:37:09.779153 env[1324]: 2025-07-12 00:37:09.624 [INFO][4497] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:37:09.779153 env[1324]: 2025-07-12 00:37:09.624 [INFO][4497] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:37:09.779153 env[1324]: 2025-07-12 00:37:09.634 [INFO][4497] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dc2bb6f1488e180aeb18054610bb6436630079b8cf6e25740c7d44b9fcd201d7" host="localhost" Jul 12 00:37:09.779153 env[1324]: 2025-07-12 00:37:09.642 [INFO][4497] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:37:09.779153 env[1324]: 2025-07-12 00:37:09.647 [INFO][4497] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:37:09.779153 env[1324]: 2025-07-12 00:37:09.658 [INFO][4497] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:37:09.779153 env[1324]: 2025-07-12 00:37:09.661 [INFO][4497] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:37:09.779153 env[1324]: 2025-07-12 00:37:09.661 [INFO][4497] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dc2bb6f1488e180aeb18054610bb6436630079b8cf6e25740c7d44b9fcd201d7" host="localhost" Jul 12 00:37:09.779153 env[1324]: 2025-07-12 00:37:09.665 [INFO][4497] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dc2bb6f1488e180aeb18054610bb6436630079b8cf6e25740c7d44b9fcd201d7 Jul 12 00:37:09.779153 env[1324]: 2025-07-12 00:37:09.676 [INFO][4497] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.dc2bb6f1488e180aeb18054610bb6436630079b8cf6e25740c7d44b9fcd201d7" host="localhost" Jul 12 00:37:09.779153 env[1324]: 2025-07-12 00:37:09.711 [INFO][4497] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.dc2bb6f1488e180aeb18054610bb6436630079b8cf6e25740c7d44b9fcd201d7" host="localhost" Jul 12 00:37:09.779153 env[1324]: 2025-07-12 00:37:09.711 [INFO][4497] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.dc2bb6f1488e180aeb18054610bb6436630079b8cf6e25740c7d44b9fcd201d7" host="localhost" Jul 12 00:37:09.779153 env[1324]: 2025-07-12 00:37:09.711 [INFO][4497] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:37:09.779153 env[1324]: 2025-07-12 00:37:09.711 [INFO][4497] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="dc2bb6f1488e180aeb18054610bb6436630079b8cf6e25740c7d44b9fcd201d7" HandleID="k8s-pod-network.dc2bb6f1488e180aeb18054610bb6436630079b8cf6e25740c7d44b9fcd201d7" Workload="localhost-k8s-calico--apiserver--7d9dcbc845--6nfvq-eth0" Jul 12 00:37:09.779801 env[1324]: 2025-07-12 00:37:09.715 [INFO][4479] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dc2bb6f1488e180aeb18054610bb6436630079b8cf6e25740c7d44b9fcd201d7" Namespace="calico-apiserver" Pod="calico-apiserver-7d9dcbc845-6nfvq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d9dcbc845--6nfvq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d9dcbc845--6nfvq-eth0", GenerateName:"calico-apiserver-7d9dcbc845-", Namespace:"calico-apiserver", SelfLink:"", UID:"fecfff6f-79d3-4090-96c1-83913da0527a", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 36, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d9dcbc845", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7d9dcbc845-6nfvq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7d6c70dc994", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:37:09.779801 env[1324]: 2025-07-12 00:37:09.715 [INFO][4479] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="dc2bb6f1488e180aeb18054610bb6436630079b8cf6e25740c7d44b9fcd201d7" Namespace="calico-apiserver" Pod="calico-apiserver-7d9dcbc845-6nfvq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d9dcbc845--6nfvq-eth0" Jul 12 00:37:09.779801 env[1324]: 2025-07-12 00:37:09.715 [INFO][4479] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7d6c70dc994 ContainerID="dc2bb6f1488e180aeb18054610bb6436630079b8cf6e25740c7d44b9fcd201d7" Namespace="calico-apiserver" Pod="calico-apiserver-7d9dcbc845-6nfvq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d9dcbc845--6nfvq-eth0" Jul 12 00:37:09.779801 env[1324]: 2025-07-12 00:37:09.722 [INFO][4479] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dc2bb6f1488e180aeb18054610bb6436630079b8cf6e25740c7d44b9fcd201d7" Namespace="calico-apiserver" Pod="calico-apiserver-7d9dcbc845-6nfvq" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7d9dcbc845--6nfvq-eth0" Jul 12 00:37:09.779801 env[1324]: 2025-07-12 00:37:09.722 [INFO][4479] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dc2bb6f1488e180aeb18054610bb6436630079b8cf6e25740c7d44b9fcd201d7" Namespace="calico-apiserver" Pod="calico-apiserver-7d9dcbc845-6nfvq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d9dcbc845--6nfvq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d9dcbc845--6nfvq-eth0", GenerateName:"calico-apiserver-7d9dcbc845-", Namespace:"calico-apiserver", SelfLink:"", UID:"fecfff6f-79d3-4090-96c1-83913da0527a", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 36, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d9dcbc845", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dc2bb6f1488e180aeb18054610bb6436630079b8cf6e25740c7d44b9fcd201d7", Pod:"calico-apiserver-7d9dcbc845-6nfvq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7d6c70dc994", MAC:"a6:31:4d:87:21:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:37:09.779801 env[1324]: 2025-07-12 00:37:09.776 [INFO][4479] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dc2bb6f1488e180aeb18054610bb6436630079b8cf6e25740c7d44b9fcd201d7" Namespace="calico-apiserver" Pod="calico-apiserver-7d9dcbc845-6nfvq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d9dcbc845--6nfvq-eth0" Jul 12 00:37:09.789000 audit[4514]: NETFILTER_CFG table=filter:115 family=2 entries=59 op=nft_register_chain pid=4514 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 12 00:37:09.789000 audit[4514]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29492 a0=3 a1=ffffeabe4940 a2=0 a3=ffff8c522fa8 items=0 ppid=3790 pid=4514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:09.789000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 12 00:37:09.808113 env[1324]: time="2025-07-12T00:37:09.808052768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:37:09.808237 env[1324]: time="2025-07-12T00:37:09.808105212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:37:09.808237 env[1324]: time="2025-07-12T00:37:09.808116293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:37:09.808397 env[1324]: time="2025-07-12T00:37:09.808353033Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc2bb6f1488e180aeb18054610bb6436630079b8cf6e25740c7d44b9fcd201d7 pid=4522 runtime=io.containerd.runc.v2 Jul 12 00:37:09.853686 systemd-resolved[1238]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:37:09.877519 env[1324]: time="2025-07-12T00:37:09.877469968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d9dcbc845-6nfvq,Uid:fecfff6f-79d3-4090-96c1-83913da0527a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"dc2bb6f1488e180aeb18054610bb6436630079b8cf6e25740c7d44b9fcd201d7\"" Jul 12 00:37:10.405460 env[1324]: time="2025-07-12T00:37:10.405419470Z" level=info msg="StopPodSandbox for \"6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c\"" Jul 12 00:37:10.405460 env[1324]: time="2025-07-12T00:37:10.405369746Z" level=info msg="StopPodSandbox for \"7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5\"" Jul 12 00:37:10.431527 env[1324]: time="2025-07-12T00:37:10.431481421Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:37:10.433856 env[1324]: time="2025-07-12T00:37:10.433809330Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:37:10.435922 env[1324]: time="2025-07-12T00:37:10.435887939Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:37:10.437570 
env[1324]: time="2025-07-12T00:37:10.437537592Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:37:10.438097 env[1324]: time="2025-07-12T00:37:10.438060395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 12 00:37:10.439942 env[1324]: time="2025-07-12T00:37:10.439911865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 00:37:10.451540 systemd-networkd[1098]: calic9f3c3023a5: Gained IPv6LL Jul 12 00:37:10.454011 env[1324]: time="2025-07-12T00:37:10.453956083Z" level=info msg="CreateContainer within sandbox \"515cd1ebaa213931794fb390e47af7448adb46df8bca52119c4de102ff5d30a7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 12 00:37:10.468457 env[1324]: time="2025-07-12T00:37:10.466636430Z" level=info msg="CreateContainer within sandbox \"515cd1ebaa213931794fb390e47af7448adb46df8bca52119c4de102ff5d30a7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"91cace5e737b247a4afc2cde5e5915e43dcdd8d8e553229a17eada8168cc8855\"" Jul 12 00:37:10.468861 env[1324]: time="2025-07-12T00:37:10.468827368Z" level=info msg="StartContainer for \"91cace5e737b247a4afc2cde5e5915e43dcdd8d8e553229a17eada8168cc8855\"" Jul 12 00:37:10.508160 env[1324]: 2025-07-12 00:37:10.457 [INFO][4578] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" Jul 12 00:37:10.508160 env[1324]: 2025-07-12 00:37:10.457 [INFO][4578] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" iface="eth0" netns="/var/run/netns/cni-d62fc471-19a9-a808-d2c9-e8e1d6196dce" Jul 12 00:37:10.508160 env[1324]: 2025-07-12 00:37:10.457 [INFO][4578] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" iface="eth0" netns="/var/run/netns/cni-d62fc471-19a9-a808-d2c9-e8e1d6196dce" Jul 12 00:37:10.508160 env[1324]: 2025-07-12 00:37:10.458 [INFO][4578] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" iface="eth0" netns="/var/run/netns/cni-d62fc471-19a9-a808-d2c9-e8e1d6196dce" Jul 12 00:37:10.508160 env[1324]: 2025-07-12 00:37:10.458 [INFO][4578] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" Jul 12 00:37:10.508160 env[1324]: 2025-07-12 00:37:10.458 [INFO][4578] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" Jul 12 00:37:10.508160 env[1324]: 2025-07-12 00:37:10.484 [INFO][4596] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" HandleID="k8s-pod-network.6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" Workload="localhost-k8s-goldmane--58fd7646b9--6mdg7-eth0" Jul 12 00:37:10.508160 env[1324]: 2025-07-12 00:37:10.484 [INFO][4596] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:37:10.508160 env[1324]: 2025-07-12 00:37:10.484 [INFO][4596] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:37:10.508160 env[1324]: 2025-07-12 00:37:10.496 [WARNING][4596] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" HandleID="k8s-pod-network.6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" Workload="localhost-k8s-goldmane--58fd7646b9--6mdg7-eth0" Jul 12 00:37:10.508160 env[1324]: 2025-07-12 00:37:10.496 [INFO][4596] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" HandleID="k8s-pod-network.6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" Workload="localhost-k8s-goldmane--58fd7646b9--6mdg7-eth0" Jul 12 00:37:10.508160 env[1324]: 2025-07-12 00:37:10.498 [INFO][4596] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:37:10.508160 env[1324]: 2025-07-12 00:37:10.499 [INFO][4578] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" Jul 12 00:37:10.509021 env[1324]: time="2025-07-12T00:37:10.508976101Z" level=info msg="TearDown network for sandbox \"6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c\" successfully" Jul 12 00:37:10.509155 env[1324]: time="2025-07-12T00:37:10.509125793Z" level=info msg="StopPodSandbox for \"6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c\" returns successfully" Jul 12 00:37:10.510052 env[1324]: time="2025-07-12T00:37:10.510014105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-6mdg7,Uid:118681c8-63c7-4aed-ae42-07c9da34ea65,Namespace:calico-system,Attempt:1,}" Jul 12 00:37:10.528037 env[1324]: 2025-07-12 00:37:10.474 [INFO][4579] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" Jul 12 00:37:10.528037 env[1324]: 2025-07-12 00:37:10.474 [INFO][4579] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" iface="eth0" netns="/var/run/netns/cni-5c9cf655-ae60-123e-dea9-af0d44872c62" Jul 12 00:37:10.528037 env[1324]: 2025-07-12 00:37:10.474 [INFO][4579] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" iface="eth0" netns="/var/run/netns/cni-5c9cf655-ae60-123e-dea9-af0d44872c62" Jul 12 00:37:10.528037 env[1324]: 2025-07-12 00:37:10.474 [INFO][4579] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" iface="eth0" netns="/var/run/netns/cni-5c9cf655-ae60-123e-dea9-af0d44872c62" Jul 12 00:37:10.528037 env[1324]: 2025-07-12 00:37:10.474 [INFO][4579] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" Jul 12 00:37:10.528037 env[1324]: 2025-07-12 00:37:10.474 [INFO][4579] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" Jul 12 00:37:10.528037 env[1324]: 2025-07-12 00:37:10.501 [INFO][4611] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" HandleID="k8s-pod-network.7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" Workload="localhost-k8s-csi--node--driver--79v58-eth0" Jul 12 00:37:10.528037 env[1324]: 2025-07-12 00:37:10.501 [INFO][4611] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:37:10.528037 env[1324]: 2025-07-12 00:37:10.501 [INFO][4611] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:37:10.528037 env[1324]: 2025-07-12 00:37:10.517 [WARNING][4611] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" HandleID="k8s-pod-network.7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" Workload="localhost-k8s-csi--node--driver--79v58-eth0" Jul 12 00:37:10.528037 env[1324]: 2025-07-12 00:37:10.517 [INFO][4611] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" HandleID="k8s-pod-network.7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" Workload="localhost-k8s-csi--node--driver--79v58-eth0" Jul 12 00:37:10.528037 env[1324]: 2025-07-12 00:37:10.519 [INFO][4611] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:37:10.528037 env[1324]: 2025-07-12 00:37:10.526 [INFO][4579] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" Jul 12 00:37:10.533348 env[1324]: time="2025-07-12T00:37:10.533299632Z" level=info msg="TearDown network for sandbox \"7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5\" successfully" Jul 12 00:37:10.533495 env[1324]: time="2025-07-12T00:37:10.533474926Z" level=info msg="StopPodSandbox for \"7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5\" returns successfully" Jul 12 00:37:10.534274 env[1324]: time="2025-07-12T00:37:10.534233507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-79v58,Uid:6f21bec0-521a-455c-b964-ef73ea0151cf,Namespace:calico-system,Attempt:1,}" Jul 12 00:37:10.540342 systemd[1]: run-netns-cni\x2dd62fc471\x2d19a9\x2da808\x2dd2c9\x2de8e1d6196dce.mount: Deactivated successfully. Jul 12 00:37:10.540506 systemd[1]: run-netns-cni\x2d5c9cf655\x2dae60\x2d123e\x2ddea9\x2daf0d44872c62.mount: Deactivated successfully. 
Jul 12 00:37:10.577296 kubelet[2103]: E0712 00:37:10.577267 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:37:10.577693 kubelet[2103]: E0712 00:37:10.577513 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:37:10.655000 audit[4676]: NETFILTER_CFG table=filter:116 family=2 entries=12 op=nft_register_rule pid=4676 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:37:10.655000 audit[4676]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffcee7f5f0 a2=0 a3=1 items=0 ppid=2213 pid=4676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:10.655000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:37:10.669000 audit[4676]: NETFILTER_CFG table=nat:117 family=2 entries=58 op=nft_register_chain pid=4676 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:37:10.669000 audit[4676]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20628 a0=3 a1=ffffcee7f5f0 a2=0 a3=1 items=0 ppid=2213 pid=4676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:10.669000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:37:10.683171 env[1324]: time="2025-07-12T00:37:10.683117851Z" level=info msg="StartContainer for 
\"91cace5e737b247a4afc2cde5e5915e43dcdd8d8e553229a17eada8168cc8855\" returns successfully" Jul 12 00:37:10.723951 systemd-networkd[1098]: calie003591177a: Link UP Jul 12 00:37:10.726712 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 12 00:37:10.726827 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie003591177a: link becomes ready Jul 12 00:37:10.727127 systemd-networkd[1098]: calie003591177a: Gained carrier Jul 12 00:37:10.741963 env[1324]: 2025-07-12 00:37:10.590 [INFO][4630] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--6mdg7-eth0 goldmane-58fd7646b9- calico-system 118681c8-63c7-4aed-ae42-07c9da34ea65 1030 0 2025-07-12 00:36:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-6mdg7 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie003591177a [] [] }} ContainerID="9b4fd99b36ce2077b24854f58397b8171a3d5754355d0ab316e3db2e36b90329" Namespace="calico-system" Pod="goldmane-58fd7646b9-6mdg7" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--6mdg7-" Jul 12 00:37:10.741963 env[1324]: 2025-07-12 00:37:10.590 [INFO][4630] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9b4fd99b36ce2077b24854f58397b8171a3d5754355d0ab316e3db2e36b90329" Namespace="calico-system" Pod="goldmane-58fd7646b9-6mdg7" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--6mdg7-eth0" Jul 12 00:37:10.741963 env[1324]: 2025-07-12 00:37:10.660 [INFO][4665] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b4fd99b36ce2077b24854f58397b8171a3d5754355d0ab316e3db2e36b90329" HandleID="k8s-pod-network.9b4fd99b36ce2077b24854f58397b8171a3d5754355d0ab316e3db2e36b90329" 
Workload="localhost-k8s-goldmane--58fd7646b9--6mdg7-eth0" Jul 12 00:37:10.741963 env[1324]: 2025-07-12 00:37:10.660 [INFO][4665] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9b4fd99b36ce2077b24854f58397b8171a3d5754355d0ab316e3db2e36b90329" HandleID="k8s-pod-network.9b4fd99b36ce2077b24854f58397b8171a3d5754355d0ab316e3db2e36b90329" Workload="localhost-k8s-goldmane--58fd7646b9--6mdg7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-6mdg7", "timestamp":"2025-07-12 00:37:10.660569264 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:37:10.741963 env[1324]: 2025-07-12 00:37:10.660 [INFO][4665] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:37:10.741963 env[1324]: 2025-07-12 00:37:10.660 [INFO][4665] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:37:10.741963 env[1324]: 2025-07-12 00:37:10.660 [INFO][4665] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:37:10.741963 env[1324]: 2025-07-12 00:37:10.680 [INFO][4665] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9b4fd99b36ce2077b24854f58397b8171a3d5754355d0ab316e3db2e36b90329" host="localhost" Jul 12 00:37:10.741963 env[1324]: 2025-07-12 00:37:10.687 [INFO][4665] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:37:10.741963 env[1324]: 2025-07-12 00:37:10.692 [INFO][4665] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:37:10.741963 env[1324]: 2025-07-12 00:37:10.693 [INFO][4665] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:37:10.741963 env[1324]: 2025-07-12 00:37:10.696 [INFO][4665] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:37:10.741963 env[1324]: 2025-07-12 00:37:10.696 [INFO][4665] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9b4fd99b36ce2077b24854f58397b8171a3d5754355d0ab316e3db2e36b90329" host="localhost" Jul 12 00:37:10.741963 env[1324]: 2025-07-12 00:37:10.697 [INFO][4665] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9b4fd99b36ce2077b24854f58397b8171a3d5754355d0ab316e3db2e36b90329 Jul 12 00:37:10.741963 env[1324]: 2025-07-12 00:37:10.702 [INFO][4665] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9b4fd99b36ce2077b24854f58397b8171a3d5754355d0ab316e3db2e36b90329" host="localhost" Jul 12 00:37:10.741963 env[1324]: 2025-07-12 00:37:10.713 [INFO][4665] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.9b4fd99b36ce2077b24854f58397b8171a3d5754355d0ab316e3db2e36b90329" host="localhost" Jul 12 
00:37:10.741963 env[1324]: 2025-07-12 00:37:10.713 [INFO][4665] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.9b4fd99b36ce2077b24854f58397b8171a3d5754355d0ab316e3db2e36b90329" host="localhost" Jul 12 00:37:10.741963 env[1324]: 2025-07-12 00:37:10.713 [INFO][4665] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:37:10.741963 env[1324]: 2025-07-12 00:37:10.713 [INFO][4665] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="9b4fd99b36ce2077b24854f58397b8171a3d5754355d0ab316e3db2e36b90329" HandleID="k8s-pod-network.9b4fd99b36ce2077b24854f58397b8171a3d5754355d0ab316e3db2e36b90329" Workload="localhost-k8s-goldmane--58fd7646b9--6mdg7-eth0" Jul 12 00:37:10.742592 env[1324]: 2025-07-12 00:37:10.721 [INFO][4630] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b4fd99b36ce2077b24854f58397b8171a3d5754355d0ab316e3db2e36b90329" Namespace="calico-system" Pod="goldmane-58fd7646b9-6mdg7" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--6mdg7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--6mdg7-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"118681c8-63c7-4aed-ae42-07c9da34ea65", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 36, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-6mdg7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie003591177a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:37:10.742592 env[1324]: 2025-07-12 00:37:10.721 [INFO][4630] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="9b4fd99b36ce2077b24854f58397b8171a3d5754355d0ab316e3db2e36b90329" Namespace="calico-system" Pod="goldmane-58fd7646b9-6mdg7" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--6mdg7-eth0" Jul 12 00:37:10.742592 env[1324]: 2025-07-12 00:37:10.721 [INFO][4630] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie003591177a ContainerID="9b4fd99b36ce2077b24854f58397b8171a3d5754355d0ab316e3db2e36b90329" Namespace="calico-system" Pod="goldmane-58fd7646b9-6mdg7" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--6mdg7-eth0" Jul 12 00:37:10.742592 env[1324]: 2025-07-12 00:37:10.729 [INFO][4630] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b4fd99b36ce2077b24854f58397b8171a3d5754355d0ab316e3db2e36b90329" Namespace="calico-system" Pod="goldmane-58fd7646b9-6mdg7" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--6mdg7-eth0" Jul 12 00:37:10.742592 env[1324]: 2025-07-12 00:37:10.729 [INFO][4630] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9b4fd99b36ce2077b24854f58397b8171a3d5754355d0ab316e3db2e36b90329" Namespace="calico-system" Pod="goldmane-58fd7646b9-6mdg7" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--6mdg7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--6mdg7-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"118681c8-63c7-4aed-ae42-07c9da34ea65", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 36, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9b4fd99b36ce2077b24854f58397b8171a3d5754355d0ab316e3db2e36b90329", Pod:"goldmane-58fd7646b9-6mdg7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie003591177a", MAC:"4a:93:68:13:77:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:37:10.742592 env[1324]: 2025-07-12 00:37:10.737 [INFO][4630] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b4fd99b36ce2077b24854f58397b8171a3d5754355d0ab316e3db2e36b90329" Namespace="calico-system" Pod="goldmane-58fd7646b9-6mdg7" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--6mdg7-eth0" Jul 12 00:37:10.760474 env[1324]: time="2025-07-12T00:37:10.760369391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:37:10.760790 env[1324]: time="2025-07-12T00:37:10.760428156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:37:10.760790 env[1324]: time="2025-07-12T00:37:10.760465119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:37:10.760790 env[1324]: time="2025-07-12T00:37:10.760687537Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9b4fd99b36ce2077b24854f58397b8171a3d5754355d0ab316e3db2e36b90329 pid=4722 runtime=io.containerd.runc.v2 Jul 12 00:37:10.756000 audit[4720]: NETFILTER_CFG table=filter:118 family=2 entries=60 op=nft_register_chain pid=4720 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 12 00:37:10.756000 audit[4720]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29916 a0=3 a1=ffffe45cfe40 a2=0 a3=ffffbac62fa8 items=0 ppid=3790 pid=4720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:10.756000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 12 00:37:10.832859 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia143b048e89: link becomes ready Jul 12 00:37:10.831137 systemd-networkd[1098]: calia143b048e89: Link UP Jul 12 00:37:10.831279 systemd-networkd[1098]: calia143b048e89: Gained carrier Jul 12 00:37:10.833672 systemd-resolved[1238]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:37:10.848335 env[1324]: 2025-07-12 00:37:10.612 [INFO][4642] cni-plugin/plugin.go 340: Calico CNI found 
existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--79v58-eth0 csi-node-driver- calico-system 6f21bec0-521a-455c-b964-ef73ea0151cf 1031 0 2025-07-12 00:36:47 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-79v58 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia143b048e89 [] [] }} ContainerID="2bc897b53a8bf5cc90ed7204dd4f4427e70fff3dadb86461be70a890ca32b366" Namespace="calico-system" Pod="csi-node-driver-79v58" WorkloadEndpoint="localhost-k8s-csi--node--driver--79v58-" Jul 12 00:37:10.848335 env[1324]: 2025-07-12 00:37:10.613 [INFO][4642] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2bc897b53a8bf5cc90ed7204dd4f4427e70fff3dadb86461be70a890ca32b366" Namespace="calico-system" Pod="csi-node-driver-79v58" WorkloadEndpoint="localhost-k8s-csi--node--driver--79v58-eth0" Jul 12 00:37:10.848335 env[1324]: 2025-07-12 00:37:10.679 [INFO][4671] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2bc897b53a8bf5cc90ed7204dd4f4427e70fff3dadb86461be70a890ca32b366" HandleID="k8s-pod-network.2bc897b53a8bf5cc90ed7204dd4f4427e70fff3dadb86461be70a890ca32b366" Workload="localhost-k8s-csi--node--driver--79v58-eth0" Jul 12 00:37:10.848335 env[1324]: 2025-07-12 00:37:10.679 [INFO][4671] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2bc897b53a8bf5cc90ed7204dd4f4427e70fff3dadb86461be70a890ca32b366" HandleID="k8s-pod-network.2bc897b53a8bf5cc90ed7204dd4f4427e70fff3dadb86461be70a890ca32b366" Workload="localhost-k8s-csi--node--driver--79v58-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3260), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-79v58", "timestamp":"2025-07-12 00:37:10.679018319 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:37:10.848335 env[1324]: 2025-07-12 00:37:10.679 [INFO][4671] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:37:10.848335 env[1324]: 2025-07-12 00:37:10.714 [INFO][4671] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:37:10.848335 env[1324]: 2025-07-12 00:37:10.714 [INFO][4671] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 00:37:10.848335 env[1324]: 2025-07-12 00:37:10.781 [INFO][4671] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2bc897b53a8bf5cc90ed7204dd4f4427e70fff3dadb86461be70a890ca32b366" host="localhost" Jul 12 00:37:10.848335 env[1324]: 2025-07-12 00:37:10.791 [INFO][4671] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 00:37:10.848335 env[1324]: 2025-07-12 00:37:10.799 [INFO][4671] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 00:37:10.848335 env[1324]: 2025-07-12 00:37:10.802 [INFO][4671] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 00:37:10.848335 env[1324]: 2025-07-12 00:37:10.805 [INFO][4671] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 00:37:10.848335 env[1324]: 2025-07-12 00:37:10.805 [INFO][4671] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2bc897b53a8bf5cc90ed7204dd4f4427e70fff3dadb86461be70a890ca32b366" host="localhost" Jul 12 00:37:10.848335 env[1324]: 2025-07-12 00:37:10.807 [INFO][4671] ipam/ipam.go 1764: 
Creating new handle: k8s-pod-network.2bc897b53a8bf5cc90ed7204dd4f4427e70fff3dadb86461be70a890ca32b366 Jul 12 00:37:10.848335 env[1324]: 2025-07-12 00:37:10.815 [INFO][4671] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2bc897b53a8bf5cc90ed7204dd4f4427e70fff3dadb86461be70a890ca32b366" host="localhost" Jul 12 00:37:10.848335 env[1324]: 2025-07-12 00:37:10.822 [INFO][4671] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.2bc897b53a8bf5cc90ed7204dd4f4427e70fff3dadb86461be70a890ca32b366" host="localhost" Jul 12 00:37:10.848335 env[1324]: 2025-07-12 00:37:10.822 [INFO][4671] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.2bc897b53a8bf5cc90ed7204dd4f4427e70fff3dadb86461be70a890ca32b366" host="localhost" Jul 12 00:37:10.848335 env[1324]: 2025-07-12 00:37:10.822 [INFO][4671] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:37:10.848335 env[1324]: 2025-07-12 00:37:10.822 [INFO][4671] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="2bc897b53a8bf5cc90ed7204dd4f4427e70fff3dadb86461be70a890ca32b366" HandleID="k8s-pod-network.2bc897b53a8bf5cc90ed7204dd4f4427e70fff3dadb86461be70a890ca32b366" Workload="localhost-k8s-csi--node--driver--79v58-eth0" Jul 12 00:37:10.848939 env[1324]: 2025-07-12 00:37:10.828 [INFO][4642] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2bc897b53a8bf5cc90ed7204dd4f4427e70fff3dadb86461be70a890ca32b366" Namespace="calico-system" Pod="csi-node-driver-79v58" WorkloadEndpoint="localhost-k8s-csi--node--driver--79v58-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--79v58-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", 
UID:"6f21bec0-521a-455c-b964-ef73ea0151cf", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 36, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-79v58", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia143b048e89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:37:10.848939 env[1324]: 2025-07-12 00:37:10.828 [INFO][4642] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="2bc897b53a8bf5cc90ed7204dd4f4427e70fff3dadb86461be70a890ca32b366" Namespace="calico-system" Pod="csi-node-driver-79v58" WorkloadEndpoint="localhost-k8s-csi--node--driver--79v58-eth0" Jul 12 00:37:10.848939 env[1324]: 2025-07-12 00:37:10.828 [INFO][4642] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia143b048e89 ContainerID="2bc897b53a8bf5cc90ed7204dd4f4427e70fff3dadb86461be70a890ca32b366" Namespace="calico-system" Pod="csi-node-driver-79v58" WorkloadEndpoint="localhost-k8s-csi--node--driver--79v58-eth0" Jul 12 00:37:10.848939 env[1324]: 2025-07-12 00:37:10.830 [INFO][4642] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2bc897b53a8bf5cc90ed7204dd4f4427e70fff3dadb86461be70a890ca32b366" Namespace="calico-system" Pod="csi-node-driver-79v58" WorkloadEndpoint="localhost-k8s-csi--node--driver--79v58-eth0" Jul 12 00:37:10.848939 env[1324]: 2025-07-12 00:37:10.831 [INFO][4642] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2bc897b53a8bf5cc90ed7204dd4f4427e70fff3dadb86461be70a890ca32b366" Namespace="calico-system" Pod="csi-node-driver-79v58" WorkloadEndpoint="localhost-k8s-csi--node--driver--79v58-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--79v58-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6f21bec0-521a-455c-b964-ef73ea0151cf", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 36, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2bc897b53a8bf5cc90ed7204dd4f4427e70fff3dadb86461be70a890ca32b366", Pod:"csi-node-driver-79v58", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, 
InterfaceName:"calia143b048e89", MAC:"ee:a8:95:11:ec:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:37:10.848939 env[1324]: 2025-07-12 00:37:10.841 [INFO][4642] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2bc897b53a8bf5cc90ed7204dd4f4427e70fff3dadb86461be70a890ca32b366" Namespace="calico-system" Pod="csi-node-driver-79v58" WorkloadEndpoint="localhost-k8s-csi--node--driver--79v58-eth0" Jul 12 00:37:10.857000 audit[4761]: NETFILTER_CFG table=filter:119 family=2 entries=62 op=nft_register_chain pid=4761 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 12 00:37:10.857000 audit[4761]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28352 a0=3 a1=ffffe5338c60 a2=0 a3=ffff9029efa8 items=0 ppid=3790 pid=4761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:10.857000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 12 00:37:10.872520 env[1324]: time="2025-07-12T00:37:10.872477235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-6mdg7,Uid:118681c8-63c7-4aed-ae42-07c9da34ea65,Namespace:calico-system,Attempt:1,} returns sandbox id \"9b4fd99b36ce2077b24854f58397b8171a3d5754355d0ab316e3db2e36b90329\"" Jul 12 00:37:10.877393 env[1324]: time="2025-07-12T00:37:10.877057446Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:37:10.877393 env[1324]: time="2025-07-12T00:37:10.877139533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:37:10.877393 env[1324]: time="2025-07-12T00:37:10.877150094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:37:10.877738 env[1324]: time="2025-07-12T00:37:10.877691537Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2bc897b53a8bf5cc90ed7204dd4f4427e70fff3dadb86461be70a890ca32b366 pid=4775 runtime=io.containerd.runc.v2 Jul 12 00:37:10.912617 systemd-resolved[1238]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:37:10.924282 env[1324]: time="2025-07-12T00:37:10.924244190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-79v58,Uid:6f21bec0-521a-455c-b964-ef73ea0151cf,Namespace:calico-system,Attempt:1,} returns sandbox id \"2bc897b53a8bf5cc90ed7204dd4f4427e70fff3dadb86461be70a890ca32b366\"" Jul 12 00:37:11.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.111:22-10.0.0.1:39272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:37:11.373756 systemd[1]: Started sshd@8-10.0.0.111:22-10.0.0.1:39272.service. 
Jul 12 00:37:11.423000 audit[4811]: USER_ACCT pid=4811 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:11.424586 sshd[4811]: Accepted publickey for core from 10.0.0.1 port 39272 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:37:11.425000 audit[4811]: CRED_ACQ pid=4811 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:11.425000 audit[4811]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd14e9c80 a2=3 a3=1 items=0 ppid=1 pid=4811 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:11.425000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:37:11.426435 sshd[4811]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:37:11.430450 systemd-logind[1309]: New session 9 of user core. Jul 12 00:37:11.430984 systemd[1]: Started session-9.scope. 
Jul 12 00:37:11.434000 audit[4811]: USER_START pid=4811 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:11.436000 audit[4814]: CRED_ACQ pid=4814 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:11.538309 systemd[1]: run-containerd-runc-k8s.io-9b4fd99b36ce2077b24854f58397b8171a3d5754355d0ab316e3db2e36b90329-runc.1NtJml.mount: Deactivated successfully. Jul 12 00:37:11.588766 kubelet[2103]: E0712 00:37:11.588721 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:37:11.594018 kubelet[2103]: I0712 00:37:11.593950 2103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7f46b5b9d6-92dnp" podStartSLOduration=22.161314881 podStartE2EDuration="24.593933808s" podCreationTimestamp="2025-07-12 00:36:47 +0000 UTC" firstStartedPulling="2025-07-12 00:37:08.006911307 +0000 UTC m=+42.682118205" lastFinishedPulling="2025-07-12 00:37:10.439530234 +0000 UTC m=+45.114737132" observedRunningTime="2025-07-12 00:37:11.593370924 +0000 UTC m=+46.268577822" watchObservedRunningTime="2025-07-12 00:37:11.593933808 +0000 UTC m=+46.269140706" Jul 12 00:37:11.602886 systemd-networkd[1098]: cali7d6c70dc994: Gained IPv6LL Jul 12 00:37:11.633272 sshd[4811]: pam_unix(sshd:session): session closed for user core Jul 12 00:37:11.634000 audit[4811]: USER_END pid=4811 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:11.634000 audit[4811]: CRED_DISP pid=4811 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:11.636877 systemd[1]: sshd@8-10.0.0.111:22-10.0.0.1:39272.service: Deactivated successfully. Jul 12 00:37:11.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.111:22-10.0.0.1:39272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:37:11.638444 systemd[1]: session-9.scope: Deactivated successfully. Jul 12 00:37:11.638445 systemd-logind[1309]: Session 9 logged out. Waiting for processes to exit. Jul 12 00:37:11.639997 systemd-logind[1309]: Removed session 9. 
Jul 12 00:37:12.371274 systemd-networkd[1098]: calie003591177a: Gained IPv6LL Jul 12 00:37:12.371550 systemd-networkd[1098]: calia143b048e89: Gained IPv6LL Jul 12 00:37:12.590265 kubelet[2103]: I0712 00:37:12.590218 2103 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:37:12.590784 kubelet[2103]: E0712 00:37:12.590768 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:37:12.614821 env[1324]: time="2025-07-12T00:37:12.614773439Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:37:12.616855 env[1324]: time="2025-07-12T00:37:12.616810716Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:37:12.618886 env[1324]: time="2025-07-12T00:37:12.618845514Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:37:12.620848 env[1324]: time="2025-07-12T00:37:12.620811146Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:37:12.622187 env[1324]: time="2025-07-12T00:37:12.621558044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 12 00:37:12.623395 env[1324]: time="2025-07-12T00:37:12.623335142Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 00:37:12.624634 env[1324]: time="2025-07-12T00:37:12.624602000Z" level=info msg="CreateContainer within sandbox \"d42fec67b342518352f397bb63d3e583ac00357e203617d6bdaef25410339430\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 00:37:12.634527 env[1324]: time="2025-07-12T00:37:12.634480285Z" level=info msg="CreateContainer within sandbox \"d42fec67b342518352f397bb63d3e583ac00357e203617d6bdaef25410339430\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"66e87f6112f2ae5a4e8f68f6d685acf5acadeb5799738dea9c5549b083e76534\"" Jul 12 00:37:12.636840 env[1324]: time="2025-07-12T00:37:12.635687219Z" level=info msg="StartContainer for \"66e87f6112f2ae5a4e8f68f6d685acf5acadeb5799738dea9c5549b083e76534\"" Jul 12 00:37:12.747664 env[1324]: time="2025-07-12T00:37:12.747612851Z" level=info msg="StartContainer for \"66e87f6112f2ae5a4e8f68f6d685acf5acadeb5799738dea9c5549b083e76534\" returns successfully" Jul 12 00:37:12.851512 env[1324]: time="2025-07-12T00:37:12.851452496Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:37:12.853196 env[1324]: time="2025-07-12T00:37:12.853157468Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:37:12.855067 env[1324]: time="2025-07-12T00:37:12.855020772Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:37:12.856349 env[1324]: time="2025-07-12T00:37:12.856322713Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:37:12.856843 env[1324]: time="2025-07-12T00:37:12.856812631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 12 00:37:12.865941 env[1324]: time="2025-07-12T00:37:12.865896055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 12 00:37:12.868398 env[1324]: time="2025-07-12T00:37:12.868354005Z" level=info msg="CreateContainer within sandbox \"dc2bb6f1488e180aeb18054610bb6436630079b8cf6e25740c7d44b9fcd201d7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 00:37:12.882210 env[1324]: time="2025-07-12T00:37:12.882107351Z" level=info msg="CreateContainer within sandbox \"dc2bb6f1488e180aeb18054610bb6436630079b8cf6e25740c7d44b9fcd201d7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"aa128b4c443bdc6998ffc22500e5660267e84913e676734e9b2289ba0a175a7f\"" Jul 12 00:37:12.882891 env[1324]: time="2025-07-12T00:37:12.882775923Z" level=info msg="StartContainer for \"aa128b4c443bdc6998ffc22500e5660267e84913e676734e9b2289ba0a175a7f\"" Jul 12 00:37:12.999126 env[1324]: time="2025-07-12T00:37:12.998738467Z" level=info msg="StartContainer for \"aa128b4c443bdc6998ffc22500e5660267e84913e676734e9b2289ba0a175a7f\" returns successfully" Jul 12 00:37:13.607652 kubelet[2103]: I0712 00:37:13.607578 2103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d9dcbc845-hgq6q" podStartSLOduration=27.107372868 podStartE2EDuration="31.607559458s" podCreationTimestamp="2025-07-12 00:36:42 +0000 UTC" firstStartedPulling="2025-07-12 00:37:08.122762722 +0000 UTC m=+42.797969620" lastFinishedPulling="2025-07-12 
00:37:12.622949312 +0000 UTC m=+47.298156210" observedRunningTime="2025-07-12 00:37:13.607461331 +0000 UTC m=+48.282668269" watchObservedRunningTime="2025-07-12 00:37:13.607559458 +0000 UTC m=+48.282766356" Jul 12 00:37:13.637625 kernel: kauditd_printk_skb: 570 callbacks suppressed Jul 12 00:37:13.637794 kernel: audit: type=1325 audit(1752280633.629:429): table=filter:120 family=2 entries=12 op=nft_register_rule pid=4953 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:37:13.637829 kernel: audit: type=1300 audit(1752280633.629:429): arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffff01a970 a2=0 a3=1 items=0 ppid=2213 pid=4953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:13.629000 audit[4953]: NETFILTER_CFG table=filter:120 family=2 entries=12 op=nft_register_rule pid=4953 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:37:13.629000 audit[4953]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffff01a970 a2=0 a3=1 items=0 ppid=2213 pid=4953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:13.629000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:37:13.640313 kernel: audit: type=1327 audit(1752280633.629:429): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:37:13.641000 audit[4953]: NETFILTER_CFG table=nat:121 family=2 entries=22 op=nft_register_rule pid=4953 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:37:13.641000 audit[4953]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 
a0=3 a1=ffffff01a970 a2=0 a3=1 items=0 ppid=2213 pid=4953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:13.648558 kernel: audit: type=1325 audit(1752280633.641:430): table=nat:121 family=2 entries=22 op=nft_register_rule pid=4953 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:37:13.648672 kernel: audit: type=1300 audit(1752280633.641:430): arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffff01a970 a2=0 a3=1 items=0 ppid=2213 pid=4953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:13.641000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:37:13.651121 kernel: audit: type=1327 audit(1752280633.641:430): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:37:13.664000 audit[4955]: NETFILTER_CFG table=filter:122 family=2 entries=12 op=nft_register_rule pid=4955 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:37:13.664000 audit[4955]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffd2054d30 a2=0 a3=1 items=0 ppid=2213 pid=4955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:13.671247 kernel: audit: type=1325 audit(1752280633.664:431): table=filter:122 family=2 entries=12 op=nft_register_rule pid=4955 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:37:13.671395 kernel: audit: type=1300 audit(1752280633.664:431): arch=c00000b7 
syscall=211 success=yes exit=4504 a0=3 a1=ffffd2054d30 a2=0 a3=1 items=0 ppid=2213 pid=4955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:13.664000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:37:13.674404 kernel: audit: type=1327 audit(1752280633.664:431): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:37:13.675000 audit[4955]: NETFILTER_CFG table=nat:123 family=2 entries=22 op=nft_register_rule pid=4955 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:37:13.675000 audit[4955]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffd2054d30 a2=0 a3=1 items=0 ppid=2213 pid=4955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:13.678454 kernel: audit: type=1325 audit(1752280633.675:432): table=nat:123 family=2 entries=22 op=nft_register_rule pid=4955 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:37:13.675000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:37:14.597701 kubelet[2103]: I0712 00:37:14.597658 2103 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:37:14.598320 kubelet[2103]: I0712 00:37:14.598290 2103 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:37:14.806585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1835956503.mount: Deactivated successfully. 
Jul 12 00:37:15.458175 env[1324]: time="2025-07-12T00:37:15.458129763Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:37:15.459956 env[1324]: time="2025-07-12T00:37:15.459918494Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:37:15.461667 env[1324]: time="2025-07-12T00:37:15.461631099Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:37:15.464348 env[1324]: time="2025-07-12T00:37:15.464315975Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:37:15.466104 env[1324]: time="2025-07-12T00:37:15.466075863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 12 00:37:15.467401 env[1324]: time="2025-07-12T00:37:15.467353556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 12 00:37:15.467776 env[1324]: time="2025-07-12T00:37:15.467749065Z" level=info msg="CreateContainer within sandbox \"9b4fd99b36ce2077b24854f58397b8171a3d5754355d0ab316e3db2e36b90329\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 12 00:37:15.479673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount747987020.mount: Deactivated successfully. 
Jul 12 00:37:15.481593 env[1324]: time="2025-07-12T00:37:15.481545111Z" level=info msg="CreateContainer within sandbox \"9b4fd99b36ce2077b24854f58397b8171a3d5754355d0ab316e3db2e36b90329\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"d166d55751e78e1e0db2e0d93257cf0b945cf55ee557b4ddec2a01ddc912e6af\"" Jul 12 00:37:15.482042 env[1324]: time="2025-07-12T00:37:15.482016666Z" level=info msg="StartContainer for \"d166d55751e78e1e0db2e0d93257cf0b945cf55ee557b4ddec2a01ddc912e6af\"" Jul 12 00:37:15.540714 env[1324]: time="2025-07-12T00:37:15.540668264Z" level=info msg="StartContainer for \"d166d55751e78e1e0db2e0d93257cf0b945cf55ee557b4ddec2a01ddc912e6af\" returns successfully" Jul 12 00:37:15.616701 kubelet[2103]: I0712 00:37:15.616639 2103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d9dcbc845-6nfvq" podStartSLOduration=30.631516381 podStartE2EDuration="33.616621724s" podCreationTimestamp="2025-07-12 00:36:42 +0000 UTC" firstStartedPulling="2025-07-12 00:37:09.878649786 +0000 UTC m=+44.553856644" lastFinishedPulling="2025-07-12 00:37:12.863755009 +0000 UTC m=+47.538961987" observedRunningTime="2025-07-12 00:37:13.620127132 +0000 UTC m=+48.295334070" watchObservedRunningTime="2025-07-12 00:37:15.616621724 +0000 UTC m=+50.291828622" Jul 12 00:37:15.617098 kubelet[2103]: I0712 00:37:15.617069 2103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-6mdg7" podStartSLOduration=25.028633633 podStartE2EDuration="29.617060516s" podCreationTimestamp="2025-07-12 00:36:46 +0000 UTC" firstStartedPulling="2025-07-12 00:37:10.878355231 +0000 UTC m=+45.553562129" lastFinishedPulling="2025-07-12 00:37:15.466782154 +0000 UTC m=+50.141989012" observedRunningTime="2025-07-12 00:37:15.616804457 +0000 UTC m=+50.292011355" watchObservedRunningTime="2025-07-12 00:37:15.617060516 +0000 UTC m=+50.292267374" Jul 12 00:37:15.626000 audit[4995]: NETFILTER_CFG 
table=filter:124 family=2 entries=12 op=nft_register_rule pid=4995 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:37:15.626000 audit[4995]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=fffff0735490 a2=0 a3=1 items=0 ppid=2213 pid=4995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:15.626000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:37:15.635000 audit[4995]: NETFILTER_CFG table=nat:125 family=2 entries=22 op=nft_register_rule pid=4995 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:37:15.635000 audit[4995]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=fffff0735490 a2=0 a3=1 items=0 ppid=2213 pid=4995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:15.635000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:37:15.806827 systemd[1]: run-containerd-runc-k8s.io-d166d55751e78e1e0db2e0d93257cf0b945cf55ee557b4ddec2a01ddc912e6af-runc.nuFdwv.mount: Deactivated successfully. Jul 12 00:37:16.608451 kubelet[2103]: I0712 00:37:16.608420 2103 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:37:16.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.111:22-10.0.0.1:51848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:37:16.637196 systemd[1]: Started sshd@9-10.0.0.111:22-10.0.0.1:51848.service. 
Jul 12 00:37:16.670431 env[1324]: time="2025-07-12T00:37:16.670377615Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:37:16.672262 env[1324]: time="2025-07-12T00:37:16.672225467Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:37:16.675444 env[1324]: time="2025-07-12T00:37:16.675418416Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:37:16.676813 env[1324]: time="2025-07-12T00:37:16.676774313Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:37:16.677466 env[1324]: time="2025-07-12T00:37:16.677431800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 12 00:37:16.687202 env[1324]: time="2025-07-12T00:37:16.687149296Z" level=info msg="CreateContainer within sandbox \"2bc897b53a8bf5cc90ed7204dd4f4427e70fff3dadb86461be70a890ca32b366\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 12 00:37:16.702459 env[1324]: time="2025-07-12T00:37:16.702417629Z" level=info msg="CreateContainer within sandbox \"2bc897b53a8bf5cc90ed7204dd4f4427e70fff3dadb86461be70a890ca32b366\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b78b16ff3ec8cbf10746f660dfe0cb037107a999f933fe35948f830b2194aa64\"" Jul 12 00:37:16.703109 env[1324]: time="2025-07-12T00:37:16.703084197Z" level=info msg="StartContainer for 
\"b78b16ff3ec8cbf10746f660dfe0cb037107a999f933fe35948f830b2194aa64\"" Jul 12 00:37:16.723000 audit[4998]: USER_ACCT pid=4998 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:16.724000 audit[4998]: CRED_ACQ pid=4998 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:16.724000 audit[4998]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcffb4fd0 a2=3 a3=1 items=0 ppid=1 pid=4998 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:16.724000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:37:16.726781 sshd[4998]: Accepted publickey for core from 10.0.0.1 port 51848 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:37:16.726645 sshd[4998]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:37:16.734830 systemd[1]: Started session-10.scope. Jul 12 00:37:16.735060 systemd-logind[1309]: New session 10 of user core. 
Jul 12 00:37:16.738000 audit[4998]: USER_START pid=4998 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:16.740000 audit[5025]: CRED_ACQ pid=5025 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:16.760252 env[1324]: time="2025-07-12T00:37:16.760214128Z" level=info msg="StartContainer for \"b78b16ff3ec8cbf10746f660dfe0cb037107a999f933fe35948f830b2194aa64\" returns successfully" Jul 12 00:37:16.761345 env[1324]: time="2025-07-12T00:37:16.761311087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 12 00:37:17.019896 sshd[4998]: pam_unix(sshd:session): session closed for user core Jul 12 00:37:17.021590 systemd[1]: Started sshd@10-10.0.0.111:22-10.0.0.1:51864.service. Jul 12 00:37:17.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.111:22-10.0.0.1:51864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:37:17.021000 audit[4998]: USER_END pid=4998 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:17.021000 audit[4998]: CRED_DISP pid=4998 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:17.026156 systemd[1]: sshd@9-10.0.0.111:22-10.0.0.1:51848.service: Deactivated successfully. Jul 12 00:37:17.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.111:22-10.0.0.1:51848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:37:17.028097 systemd-logind[1309]: Session 10 logged out. Waiting for processes to exit. Jul 12 00:37:17.028197 systemd[1]: session-10.scope: Deactivated successfully. Jul 12 00:37:17.029132 systemd-logind[1309]: Removed session 10. 
Jul 12 00:37:17.067246 sshd[5046]: Accepted publickey for core from 10.0.0.1 port 51864 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:37:17.065000 audit[5046]: USER_ACCT pid=5046 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:17.067000 audit[5046]: CRED_ACQ pid=5046 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:17.067000 audit[5046]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdd0d5a40 a2=3 a3=1 items=0 ppid=1 pid=5046 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:17.067000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:37:17.069154 sshd[5046]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:37:17.074414 systemd-logind[1309]: New session 11 of user core. Jul 12 00:37:17.074698 systemd[1]: Started session-11.scope. 
Jul 12 00:37:17.081000 audit[5046]: USER_START pid=5046 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:17.083000 audit[5051]: CRED_ACQ pid=5051 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:17.234137 sshd[5046]: pam_unix(sshd:session): session closed for user core Jul 12 00:37:17.234000 audit[5046]: USER_END pid=5046 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:17.235000 audit[5046]: CRED_DISP pid=5046 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:17.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.111:22-10.0.0.1:51878 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:37:17.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.111:22-10.0.0.1:51864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:37:17.239084 systemd[1]: Started sshd@11-10.0.0.111:22-10.0.0.1:51878.service. Jul 12 00:37:17.239596 systemd[1]: sshd@10-10.0.0.111:22-10.0.0.1:51864.service: Deactivated successfully. 
Jul 12 00:37:17.240410 systemd[1]: session-11.scope: Deactivated successfully. Jul 12 00:37:17.243535 systemd-logind[1309]: Session 11 logged out. Waiting for processes to exit. Jul 12 00:37:17.245372 systemd-logind[1309]: Removed session 11. Jul 12 00:37:17.299000 audit[5060]: USER_ACCT pid=5060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:17.300000 audit[5060]: CRED_ACQ pid=5060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:17.300000 audit[5060]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffef7272f0 a2=3 a3=1 items=0 ppid=1 pid=5060 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:17.300000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:37:17.302363 sshd[5060]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:37:17.303792 sshd[5060]: Accepted publickey for core from 10.0.0.1 port 51878 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:37:17.308655 systemd-logind[1309]: New session 12 of user core. Jul 12 00:37:17.308659 systemd[1]: Started session-12.scope. 
Jul 12 00:37:17.312000 audit[5060]: USER_START pid=5060 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:17.313000 audit[5064]: CRED_ACQ pid=5064 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:17.441508 sshd[5060]: pam_unix(sshd:session): session closed for user core Jul 12 00:37:17.441000 audit[5060]: USER_END pid=5060 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:17.441000 audit[5060]: CRED_DISP pid=5060 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:17.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.111:22-10.0.0.1:51878 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:37:17.444673 systemd-logind[1309]: Session 12 logged out. Waiting for processes to exit. Jul 12 00:37:17.444869 systemd[1]: sshd@11-10.0.0.111:22-10.0.0.1:51878.service: Deactivated successfully. Jul 12 00:37:17.446069 systemd[1]: session-12.scope: Deactivated successfully. Jul 12 00:37:17.446493 systemd-logind[1309]: Removed session 12. 
Jul 12 00:37:17.917233 kubelet[2103]: I0712 00:37:17.917195 2103 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:37:18.060000 audit[5112]: NETFILTER_CFG table=filter:126 family=2 entries=11 op=nft_register_rule pid=5112 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:37:18.060000 audit[5112]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffd766c7d0 a2=0 a3=1 items=0 ppid=2213 pid=5112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:18.060000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:37:18.070000 audit[5112]: NETFILTER_CFG table=nat:127 family=2 entries=29 op=nft_register_chain pid=5112 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:37:18.070000 audit[5112]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10116 a0=3 a1=ffffd766c7d0 a2=0 a3=1 items=0 ppid=2213 pid=5112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:18.070000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:37:18.651194 env[1324]: time="2025-07-12T00:37:18.651144329Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:37:18.652690 env[1324]: time="2025-07-12T00:37:18.652657474Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:37:18.654409 env[1324]: time="2025-07-12T00:37:18.654374153Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:37:18.655562 env[1324]: time="2025-07-12T00:37:18.655535193Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:37:18.655917 env[1324]: time="2025-07-12T00:37:18.655891498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 12 00:37:18.659287 env[1324]: time="2025-07-12T00:37:18.659240290Z" level=info msg="CreateContainer within sandbox \"2bc897b53a8bf5cc90ed7204dd4f4427e70fff3dadb86461be70a890ca32b366\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 12 00:37:18.670946 env[1324]: time="2025-07-12T00:37:18.670725244Z" level=info msg="CreateContainer within sandbox \"2bc897b53a8bf5cc90ed7204dd4f4427e70fff3dadb86461be70a890ca32b366\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1919dfe99dba560d197e45ebb777611b382d0a76f0712e1a1783bd9c130395f6\"" Jul 12 00:37:18.671782 env[1324]: time="2025-07-12T00:37:18.671744595Z" level=info msg="StartContainer for \"1919dfe99dba560d197e45ebb777611b382d0a76f0712e1a1783bd9c130395f6\"" Jul 12 00:37:18.730904 env[1324]: time="2025-07-12T00:37:18.730853525Z" level=info msg="StartContainer for \"1919dfe99dba560d197e45ebb777611b382d0a76f0712e1a1783bd9c130395f6\" returns successfully" 
Jul 12 00:37:19.499824 kubelet[2103]: I0712 00:37:19.499781 2103 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 12 00:37:19.500206 kubelet[2103]: I0712 00:37:19.499835 2103 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 12 00:37:22.444834 systemd[1]: Started sshd@12-10.0.0.111:22-10.0.0.1:51894.service. Jul 12 00:37:22.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.111:22-10.0.0.1:51894 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:37:22.448322 kernel: kauditd_printk_skb: 47 callbacks suppressed Jul 12 00:37:22.448421 kernel: audit: type=1130 audit(1752280642.443:464): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.111:22-10.0.0.1:51894 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:37:22.500000 audit[5168]: USER_ACCT pid=5168 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:22.502054 sshd[5168]: Accepted publickey for core from 10.0.0.1 port 51894 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:37:22.502000 audit[5168]: CRED_ACQ pid=5168 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:22.505186 sshd[5168]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:37:22.508113 kernel: audit: type=1101 audit(1752280642.500:465): pid=5168 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:22.509264 kernel: audit: type=1103 audit(1752280642.502:466): pid=5168 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:22.509313 kernel: audit: type=1006 audit(1752280642.502:467): pid=5168 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jul 12 00:37:22.509549 systemd-logind[1309]: New session 13 of user core. Jul 12 00:37:22.510129 systemd[1]: Started session-13.scope. 
Jul 12 00:37:22.502000 audit[5168]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd32f5890 a2=3 a3=1 items=0 ppid=1 pid=5168 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:22.513463 kernel: audit: type=1300 audit(1752280642.502:467): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd32f5890 a2=3 a3=1 items=0 ppid=1 pid=5168 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:22.513534 kernel: audit: type=1327 audit(1752280642.502:467): proctitle=737368643A20636F7265205B707269765D Jul 12 00:37:22.502000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:37:22.514000 audit[5168]: USER_START pid=5168 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:22.515000 audit[5171]: CRED_ACQ pid=5171 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:22.523951 kernel: audit: type=1105 audit(1752280642.514:468): pid=5168 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:22.524034 kernel: audit: type=1103 audit(1752280642.515:469): pid=5171 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:22.660041 sshd[5168]: pam_unix(sshd:session): session closed for user core Jul 12 00:37:22.659000 audit[5168]: USER_END pid=5168 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:22.662928 systemd[1]: sshd@12-10.0.0.111:22-10.0.0.1:51894.service: Deactivated successfully. Jul 12 00:37:22.663865 systemd-logind[1309]: Session 13 logged out. Waiting for processes to exit. Jul 12 00:37:22.663929 systemd[1]: session-13.scope: Deactivated successfully. Jul 12 00:37:22.664561 systemd-logind[1309]: Removed session 13. Jul 12 00:37:22.660000 audit[5168]: CRED_DISP pid=5168 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:22.667571 kernel: audit: type=1106 audit(1752280642.659:470): pid=5168 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:22.667637 kernel: audit: type=1104 audit(1752280642.660:471): pid=5168 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:22.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.111:22-10.0.0.1:51894 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jul 12 00:37:23.452736 kubelet[2103]: I0712 00:37:23.452079 2103 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:37:23.485642 kubelet[2103]: I0712 00:37:23.485590 2103 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-79v58" podStartSLOduration=28.754121422 podStartE2EDuration="36.485564578s" podCreationTimestamp="2025-07-12 00:36:47 +0000 UTC" firstStartedPulling="2025-07-12 00:37:10.925790315 +0000 UTC m=+45.600997173" lastFinishedPulling="2025-07-12 00:37:18.657233431 +0000 UTC m=+53.332440329" observedRunningTime="2025-07-12 00:37:19.638054207 +0000 UTC m=+54.313261105" watchObservedRunningTime="2025-07-12 00:37:23.485564578 +0000 UTC m=+58.160771476" Jul 12 00:37:23.505000 audit[5183]: NETFILTER_CFG table=filter:128 family=2 entries=9 op=nft_register_rule pid=5183 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:37:23.505000 audit[5183]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=fffff8515840 a2=0 a3=1 items=0 ppid=2213 pid=5183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:23.505000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:37:23.513000 audit[5183]: NETFILTER_CFG table=nat:129 family=2 entries=31 op=nft_register_chain pid=5183 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:37:23.513000 audit[5183]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10884 a0=3 a1=fffff8515840 a2=0 a3=1 items=0 ppid=2213 pid=5183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 
00:37:23.513000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:37:25.407074 env[1324]: time="2025-07-12T00:37:25.406849321Z" level=info msg="StopPodSandbox for \"da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13\"" Jul 12 00:37:25.530998 env[1324]: 2025-07-12 00:37:25.484 [WARNING][5194] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d9dcbc845--hgq6q-eth0", GenerateName:"calico-apiserver-7d9dcbc845-", Namespace:"calico-apiserver", SelfLink:"", UID:"faaca0cb-1548-4256-82f4-00433e531079", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 36, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d9dcbc845", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d42fec67b342518352f397bb63d3e583ac00357e203617d6bdaef25410339430", Pod:"calico-apiserver-7d9dcbc845-hgq6q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"calia47a7df3e05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:37:25.530998 env[1324]: 2025-07-12 00:37:25.485 [INFO][5194] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" Jul 12 00:37:25.530998 env[1324]: 2025-07-12 00:37:25.485 [INFO][5194] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" iface="eth0" netns="" Jul 12 00:37:25.530998 env[1324]: 2025-07-12 00:37:25.485 [INFO][5194] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" Jul 12 00:37:25.530998 env[1324]: 2025-07-12 00:37:25.485 [INFO][5194] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" Jul 12 00:37:25.530998 env[1324]: 2025-07-12 00:37:25.516 [INFO][5204] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" HandleID="k8s-pod-network.da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" Workload="localhost-k8s-calico--apiserver--7d9dcbc845--hgq6q-eth0" Jul 12 00:37:25.530998 env[1324]: 2025-07-12 00:37:25.516 [INFO][5204] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:37:25.530998 env[1324]: 2025-07-12 00:37:25.516 [INFO][5204] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:37:25.530998 env[1324]: 2025-07-12 00:37:25.524 [WARNING][5204] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" HandleID="k8s-pod-network.da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" Workload="localhost-k8s-calico--apiserver--7d9dcbc845--hgq6q-eth0" Jul 12 00:37:25.530998 env[1324]: 2025-07-12 00:37:25.525 [INFO][5204] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" HandleID="k8s-pod-network.da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" Workload="localhost-k8s-calico--apiserver--7d9dcbc845--hgq6q-eth0" Jul 12 00:37:25.530998 env[1324]: 2025-07-12 00:37:25.526 [INFO][5204] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:37:25.530998 env[1324]: 2025-07-12 00:37:25.529 [INFO][5194] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" Jul 12 00:37:25.531652 env[1324]: time="2025-07-12T00:37:25.531176813Z" level=info msg="TearDown network for sandbox \"da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13\" successfully" Jul 12 00:37:25.531717 env[1324]: time="2025-07-12T00:37:25.531689806Z" level=info msg="StopPodSandbox for \"da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13\" returns successfully" Jul 12 00:37:25.532370 env[1324]: time="2025-07-12T00:37:25.532323846Z" level=info msg="RemovePodSandbox for \"da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13\"" Jul 12 00:37:25.532499 env[1324]: time="2025-07-12T00:37:25.532373009Z" level=info msg="Forcibly stopping sandbox \"da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13\"" Jul 12 00:37:25.608457 env[1324]: 2025-07-12 00:37:25.570 [WARNING][5222] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d9dcbc845--hgq6q-eth0", GenerateName:"calico-apiserver-7d9dcbc845-", Namespace:"calico-apiserver", SelfLink:"", UID:"faaca0cb-1548-4256-82f4-00433e531079", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 36, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d9dcbc845", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d42fec67b342518352f397bb63d3e583ac00357e203617d6bdaef25410339430", Pod:"calico-apiserver-7d9dcbc845-hgq6q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia47a7df3e05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:37:25.608457 env[1324]: 2025-07-12 00:37:25.570 [INFO][5222] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" Jul 12 00:37:25.608457 env[1324]: 2025-07-12 00:37:25.570 [INFO][5222] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" iface="eth0" netns="" Jul 12 00:37:25.608457 env[1324]: 2025-07-12 00:37:25.570 [INFO][5222] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" Jul 12 00:37:25.608457 env[1324]: 2025-07-12 00:37:25.570 [INFO][5222] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" Jul 12 00:37:25.608457 env[1324]: 2025-07-12 00:37:25.590 [INFO][5231] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" HandleID="k8s-pod-network.da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" Workload="localhost-k8s-calico--apiserver--7d9dcbc845--hgq6q-eth0" Jul 12 00:37:25.608457 env[1324]: 2025-07-12 00:37:25.590 [INFO][5231] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:37:25.608457 env[1324]: 2025-07-12 00:37:25.590 [INFO][5231] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:37:25.608457 env[1324]: 2025-07-12 00:37:25.602 [WARNING][5231] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" HandleID="k8s-pod-network.da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" Workload="localhost-k8s-calico--apiserver--7d9dcbc845--hgq6q-eth0" Jul 12 00:37:25.608457 env[1324]: 2025-07-12 00:37:25.602 [INFO][5231] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" HandleID="k8s-pod-network.da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" Workload="localhost-k8s-calico--apiserver--7d9dcbc845--hgq6q-eth0" Jul 12 00:37:25.608457 env[1324]: 2025-07-12 00:37:25.603 [INFO][5231] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:37:25.608457 env[1324]: 2025-07-12 00:37:25.605 [INFO][5222] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13" Jul 12 00:37:25.608898 env[1324]: time="2025-07-12T00:37:25.608500272Z" level=info msg="TearDown network for sandbox \"da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13\" successfully" Jul 12 00:37:25.649972 env[1324]: time="2025-07-12T00:37:25.649908554Z" level=info msg="RemovePodSandbox \"da72391fc0dfd20b0a79b1e936dd50addc77e0798e07e7e9e82680daff3f6a13\" returns successfully" Jul 12 00:37:25.650687 env[1324]: time="2025-07-12T00:37:25.650598798Z" level=info msg="StopPodSandbox for \"f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7\"" Jul 12 00:37:25.722707 env[1324]: 2025-07-12 00:37:25.684 [WARNING][5249] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--dfvjl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3fa0a624-ecf9-48dd-83d5-27860d361813", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 36, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"faaf728a8da2fb1c8bb9684deace59b3182595d8561b55197da6abcd92388370", Pod:"coredns-7c65d6cfc9-dfvjl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic9f3c3023a5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:37:25.722707 env[1324]: 2025-07-12 00:37:25.684 [INFO][5249] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" Jul 12 00:37:25.722707 env[1324]: 2025-07-12 00:37:25.684 [INFO][5249] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" iface="eth0" netns="" Jul 12 00:37:25.722707 env[1324]: 2025-07-12 00:37:25.684 [INFO][5249] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" Jul 12 00:37:25.722707 env[1324]: 2025-07-12 00:37:25.684 [INFO][5249] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" Jul 12 00:37:25.722707 env[1324]: 2025-07-12 00:37:25.708 [INFO][5258] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" HandleID="k8s-pod-network.f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" Workload="localhost-k8s-coredns--7c65d6cfc9--dfvjl-eth0" Jul 12 00:37:25.722707 env[1324]: 2025-07-12 00:37:25.708 [INFO][5258] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:37:25.722707 env[1324]: 2025-07-12 00:37:25.708 [INFO][5258] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:37:25.722707 env[1324]: 2025-07-12 00:37:25.717 [WARNING][5258] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" HandleID="k8s-pod-network.f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" Workload="localhost-k8s-coredns--7c65d6cfc9--dfvjl-eth0" Jul 12 00:37:25.722707 env[1324]: 2025-07-12 00:37:25.717 [INFO][5258] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" HandleID="k8s-pod-network.f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" Workload="localhost-k8s-coredns--7c65d6cfc9--dfvjl-eth0" Jul 12 00:37:25.722707 env[1324]: 2025-07-12 00:37:25.719 [INFO][5258] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:37:25.722707 env[1324]: 2025-07-12 00:37:25.721 [INFO][5249] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" Jul 12 00:37:25.723184 env[1324]: time="2025-07-12T00:37:25.722735451Z" level=info msg="TearDown network for sandbox \"f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7\" successfully" Jul 12 00:37:25.723184 env[1324]: time="2025-07-12T00:37:25.722765172Z" level=info msg="StopPodSandbox for \"f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7\" returns successfully" Jul 12 00:37:25.723475 env[1324]: time="2025-07-12T00:37:25.723443135Z" level=info msg="RemovePodSandbox for \"f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7\"" Jul 12 00:37:25.723598 env[1324]: time="2025-07-12T00:37:25.723560542Z" level=info msg="Forcibly stopping sandbox \"f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7\"" Jul 12 00:37:25.787971 env[1324]: 2025-07-12 00:37:25.755 [WARNING][5276] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--dfvjl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3fa0a624-ecf9-48dd-83d5-27860d361813", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 36, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"faaf728a8da2fb1c8bb9684deace59b3182595d8561b55197da6abcd92388370", Pod:"coredns-7c65d6cfc9-dfvjl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic9f3c3023a5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:37:25.787971 env[1324]: 2025-07-12 00:37:25.755 [INFO][5276] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" Jul 12 00:37:25.787971 env[1324]: 2025-07-12 00:37:25.755 [INFO][5276] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" iface="eth0" netns="" Jul 12 00:37:25.787971 env[1324]: 2025-07-12 00:37:25.755 [INFO][5276] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" Jul 12 00:37:25.787971 env[1324]: 2025-07-12 00:37:25.755 [INFO][5276] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" Jul 12 00:37:25.787971 env[1324]: 2025-07-12 00:37:25.773 [INFO][5285] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" HandleID="k8s-pod-network.f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" Workload="localhost-k8s-coredns--7c65d6cfc9--dfvjl-eth0" Jul 12 00:37:25.787971 env[1324]: 2025-07-12 00:37:25.773 [INFO][5285] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:37:25.787971 env[1324]: 2025-07-12 00:37:25.773 [INFO][5285] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:37:25.787971 env[1324]: 2025-07-12 00:37:25.783 [WARNING][5285] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" HandleID="k8s-pod-network.f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" Workload="localhost-k8s-coredns--7c65d6cfc9--dfvjl-eth0" Jul 12 00:37:25.787971 env[1324]: 2025-07-12 00:37:25.783 [INFO][5285] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" HandleID="k8s-pod-network.f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" Workload="localhost-k8s-coredns--7c65d6cfc9--dfvjl-eth0" Jul 12 00:37:25.787971 env[1324]: 2025-07-12 00:37:25.784 [INFO][5285] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:37:25.787971 env[1324]: 2025-07-12 00:37:25.786 [INFO][5276] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7" Jul 12 00:37:25.788436 env[1324]: time="2025-07-12T00:37:25.788006872Z" level=info msg="TearDown network for sandbox \"f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7\" successfully" Jul 12 00:37:25.791209 env[1324]: time="2025-07-12T00:37:25.791168191Z" level=info msg="RemovePodSandbox \"f9a07aa14be262ce3a2af0ccb87b2a272b4160c6a5588ec8bc133f33a4fcc7f7\" returns successfully" Jul 12 00:37:25.791711 env[1324]: time="2025-07-12T00:37:25.791684263Z" level=info msg="StopPodSandbox for \"c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587\"" Jul 12 00:37:25.858920 env[1324]: 2025-07-12 00:37:25.824 [WARNING][5303] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--nkzk8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3888f770-0a64-4382-86fc-ba4105786dc9", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 36, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1c6beb9006060003e8970e7571fb5d5eb34a7b5ede836e2d65b13dea1e35ea3c", Pod:"coredns-7c65d6cfc9-nkzk8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ea13a1589a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:37:25.858920 env[1324]: 2025-07-12 00:37:25.824 [INFO][5303] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" Jul 12 00:37:25.858920 env[1324]: 2025-07-12 00:37:25.824 [INFO][5303] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" iface="eth0" netns="" Jul 12 00:37:25.858920 env[1324]: 2025-07-12 00:37:25.824 [INFO][5303] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" Jul 12 00:37:25.858920 env[1324]: 2025-07-12 00:37:25.824 [INFO][5303] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" Jul 12 00:37:25.858920 env[1324]: 2025-07-12 00:37:25.843 [INFO][5312] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" HandleID="k8s-pod-network.c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" Workload="localhost-k8s-coredns--7c65d6cfc9--nkzk8-eth0" Jul 12 00:37:25.858920 env[1324]: 2025-07-12 00:37:25.843 [INFO][5312] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:37:25.858920 env[1324]: 2025-07-12 00:37:25.843 [INFO][5312] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:37:25.858920 env[1324]: 2025-07-12 00:37:25.853 [WARNING][5312] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" HandleID="k8s-pod-network.c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" Workload="localhost-k8s-coredns--7c65d6cfc9--nkzk8-eth0" Jul 12 00:37:25.858920 env[1324]: 2025-07-12 00:37:25.853 [INFO][5312] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" HandleID="k8s-pod-network.c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" Workload="localhost-k8s-coredns--7c65d6cfc9--nkzk8-eth0" Jul 12 00:37:25.858920 env[1324]: 2025-07-12 00:37:25.855 [INFO][5312] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:37:25.858920 env[1324]: 2025-07-12 00:37:25.857 [INFO][5303] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" Jul 12 00:37:25.859496 env[1324]: time="2025-07-12T00:37:25.859459602Z" level=info msg="TearDown network for sandbox \"c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587\" successfully" Jul 12 00:37:25.859572 env[1324]: time="2025-07-12T00:37:25.859550968Z" level=info msg="StopPodSandbox for \"c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587\" returns successfully" Jul 12 00:37:25.860188 env[1324]: time="2025-07-12T00:37:25.860141005Z" level=info msg="RemovePodSandbox for \"c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587\"" Jul 12 00:37:25.860249 env[1324]: time="2025-07-12T00:37:25.860190768Z" level=info msg="Forcibly stopping sandbox \"c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587\"" Jul 12 00:37:25.928420 env[1324]: 2025-07-12 00:37:25.896 [WARNING][5330] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--nkzk8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3888f770-0a64-4382-86fc-ba4105786dc9", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 36, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1c6beb9006060003e8970e7571fb5d5eb34a7b5ede836e2d65b13dea1e35ea3c", Pod:"coredns-7c65d6cfc9-nkzk8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ea13a1589a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:37:25.928420 env[1324]: 2025-07-12 00:37:25.896 [INFO][5330] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" Jul 12 00:37:25.928420 env[1324]: 2025-07-12 00:37:25.896 [INFO][5330] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" iface="eth0" netns="" Jul 12 00:37:25.928420 env[1324]: 2025-07-12 00:37:25.896 [INFO][5330] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" Jul 12 00:37:25.928420 env[1324]: 2025-07-12 00:37:25.896 [INFO][5330] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" Jul 12 00:37:25.928420 env[1324]: 2025-07-12 00:37:25.914 [INFO][5338] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" HandleID="k8s-pod-network.c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" Workload="localhost-k8s-coredns--7c65d6cfc9--nkzk8-eth0" Jul 12 00:37:25.928420 env[1324]: 2025-07-12 00:37:25.914 [INFO][5338] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:37:25.928420 env[1324]: 2025-07-12 00:37:25.914 [INFO][5338] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:37:25.928420 env[1324]: 2025-07-12 00:37:25.923 [WARNING][5338] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" HandleID="k8s-pod-network.c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" Workload="localhost-k8s-coredns--7c65d6cfc9--nkzk8-eth0" Jul 12 00:37:25.928420 env[1324]: 2025-07-12 00:37:25.923 [INFO][5338] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" HandleID="k8s-pod-network.c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" Workload="localhost-k8s-coredns--7c65d6cfc9--nkzk8-eth0" Jul 12 00:37:25.928420 env[1324]: 2025-07-12 00:37:25.925 [INFO][5338] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:37:25.928420 env[1324]: 2025-07-12 00:37:25.926 [INFO][5330] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587" Jul 12 00:37:25.928876 env[1324]: time="2025-07-12T00:37:25.928449297Z" level=info msg="TearDown network for sandbox \"c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587\" successfully" Jul 12 00:37:25.931487 env[1324]: time="2025-07-12T00:37:25.931451646Z" level=info msg="RemovePodSandbox \"c28fd334c8a98001fffcd8e00da4a01d8520b3e3768dccc83faf137b91244587\" returns successfully" Jul 12 00:37:25.931945 env[1324]: time="2025-07-12T00:37:25.931899354Z" level=info msg="StopPodSandbox for \"7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5\"" Jul 12 00:37:25.995615 env[1324]: 2025-07-12 00:37:25.963 [WARNING][5356] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--79v58-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6f21bec0-521a-455c-b964-ef73ea0151cf", ResourceVersion:"1150", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 36, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2bc897b53a8bf5cc90ed7204dd4f4427e70fff3dadb86461be70a890ca32b366", Pod:"csi-node-driver-79v58", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia143b048e89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:37:25.995615 env[1324]: 2025-07-12 00:37:25.963 [INFO][5356] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" Jul 12 00:37:25.995615 env[1324]: 2025-07-12 00:37:25.963 [INFO][5356] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" iface="eth0" netns="" Jul 12 00:37:25.995615 env[1324]: 2025-07-12 00:37:25.963 [INFO][5356] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" Jul 12 00:37:25.995615 env[1324]: 2025-07-12 00:37:25.963 [INFO][5356] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" Jul 12 00:37:25.995615 env[1324]: 2025-07-12 00:37:25.981 [INFO][5365] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" HandleID="k8s-pod-network.7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" Workload="localhost-k8s-csi--node--driver--79v58-eth0" Jul 12 00:37:25.995615 env[1324]: 2025-07-12 00:37:25.981 [INFO][5365] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:37:25.995615 env[1324]: 2025-07-12 00:37:25.981 [INFO][5365] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:37:25.995615 env[1324]: 2025-07-12 00:37:25.989 [WARNING][5365] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" HandleID="k8s-pod-network.7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" Workload="localhost-k8s-csi--node--driver--79v58-eth0" Jul 12 00:37:25.995615 env[1324]: 2025-07-12 00:37:25.989 [INFO][5365] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" HandleID="k8s-pod-network.7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" Workload="localhost-k8s-csi--node--driver--79v58-eth0" Jul 12 00:37:25.995615 env[1324]: 2025-07-12 00:37:25.991 [INFO][5365] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:37:25.995615 env[1324]: 2025-07-12 00:37:25.993 [INFO][5356] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" Jul 12 00:37:25.996140 env[1324]: time="2025-07-12T00:37:25.996098348Z" level=info msg="TearDown network for sandbox \"7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5\" successfully" Jul 12 00:37:25.996210 env[1324]: time="2025-07-12T00:37:25.996194034Z" level=info msg="StopPodSandbox for \"7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5\" returns successfully" Jul 12 00:37:25.997142 env[1324]: time="2025-07-12T00:37:25.997044768Z" level=info msg="RemovePodSandbox for \"7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5\"" Jul 12 00:37:25.997213 env[1324]: time="2025-07-12T00:37:25.997147374Z" level=info msg="Forcibly stopping sandbox \"7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5\"" Jul 12 00:37:26.062416 env[1324]: 2025-07-12 00:37:26.028 [WARNING][5382] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--79v58-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6f21bec0-521a-455c-b964-ef73ea0151cf", ResourceVersion:"1150", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 36, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2bc897b53a8bf5cc90ed7204dd4f4427e70fff3dadb86461be70a890ca32b366", Pod:"csi-node-driver-79v58", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia143b048e89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:37:26.062416 env[1324]: 2025-07-12 00:37:26.028 [INFO][5382] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" Jul 12 00:37:26.062416 env[1324]: 2025-07-12 00:37:26.028 [INFO][5382] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" iface="eth0" netns="" Jul 12 00:37:26.062416 env[1324]: 2025-07-12 00:37:26.028 [INFO][5382] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" Jul 12 00:37:26.062416 env[1324]: 2025-07-12 00:37:26.028 [INFO][5382] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" Jul 12 00:37:26.062416 env[1324]: 2025-07-12 00:37:26.047 [INFO][5392] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" HandleID="k8s-pod-network.7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" Workload="localhost-k8s-csi--node--driver--79v58-eth0" Jul 12 00:37:26.062416 env[1324]: 2025-07-12 00:37:26.047 [INFO][5392] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:37:26.062416 env[1324]: 2025-07-12 00:37:26.047 [INFO][5392] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:37:26.062416 env[1324]: 2025-07-12 00:37:26.056 [WARNING][5392] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" HandleID="k8s-pod-network.7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" Workload="localhost-k8s-csi--node--driver--79v58-eth0" Jul 12 00:37:26.062416 env[1324]: 2025-07-12 00:37:26.056 [INFO][5392] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" HandleID="k8s-pod-network.7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" Workload="localhost-k8s-csi--node--driver--79v58-eth0" Jul 12 00:37:26.062416 env[1324]: 2025-07-12 00:37:26.058 [INFO][5392] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:37:26.062416 env[1324]: 2025-07-12 00:37:26.059 [INFO][5382] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5" Jul 12 00:37:26.062846 env[1324]: time="2025-07-12T00:37:26.062445834Z" level=info msg="TearDown network for sandbox \"7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5\" successfully" Jul 12 00:37:26.065548 env[1324]: time="2025-07-12T00:37:26.065518785Z" level=info msg="RemovePodSandbox \"7448799b7bfde94d03481ea20881cdc8df92bfb0c838862b9b5b1c094e94c9d5\" returns successfully" Jul 12 00:37:26.066056 env[1324]: time="2025-07-12T00:37:26.066034377Z" level=info msg="StopPodSandbox for \"939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273\"" Jul 12 00:37:26.128526 env[1324]: 2025-07-12 00:37:26.096 [WARNING][5410] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" WorkloadEndpoint="localhost-k8s-whisker--6d9765465--897pg-eth0" Jul 12 00:37:26.128526 env[1324]: 2025-07-12 00:37:26.096 [INFO][5410] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" Jul 12 00:37:26.128526 env[1324]: 2025-07-12 00:37:26.096 [INFO][5410] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" iface="eth0" netns="" Jul 12 00:37:26.128526 env[1324]: 2025-07-12 00:37:26.096 [INFO][5410] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" Jul 12 00:37:26.128526 env[1324]: 2025-07-12 00:37:26.096 [INFO][5410] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" Jul 12 00:37:26.128526 env[1324]: 2025-07-12 00:37:26.115 [INFO][5420] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" HandleID="k8s-pod-network.939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" Workload="localhost-k8s-whisker--6d9765465--897pg-eth0" Jul 12 00:37:26.128526 env[1324]: 2025-07-12 00:37:26.115 [INFO][5420] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:37:26.128526 env[1324]: 2025-07-12 00:37:26.115 [INFO][5420] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:37:26.128526 env[1324]: 2025-07-12 00:37:26.123 [WARNING][5420] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" HandleID="k8s-pod-network.939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" Workload="localhost-k8s-whisker--6d9765465--897pg-eth0" Jul 12 00:37:26.128526 env[1324]: 2025-07-12 00:37:26.123 [INFO][5420] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" HandleID="k8s-pod-network.939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" Workload="localhost-k8s-whisker--6d9765465--897pg-eth0" Jul 12 00:37:26.128526 env[1324]: 2025-07-12 00:37:26.125 [INFO][5420] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:37:26.128526 env[1324]: 2025-07-12 00:37:26.127 [INFO][5410] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" Jul 12 00:37:26.128942 env[1324]: time="2025-07-12T00:37:26.128552862Z" level=info msg="TearDown network for sandbox \"939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273\" successfully" Jul 12 00:37:26.128942 env[1324]: time="2025-07-12T00:37:26.128581584Z" level=info msg="StopPodSandbox for \"939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273\" returns successfully" Jul 12 00:37:26.129421 env[1324]: time="2025-07-12T00:37:26.129363112Z" level=info msg="RemovePodSandbox for \"939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273\"" Jul 12 00:37:26.129567 env[1324]: time="2025-07-12T00:37:26.129527083Z" level=info msg="Forcibly stopping sandbox \"939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273\"" Jul 12 00:37:26.192618 env[1324]: 2025-07-12 00:37:26.160 [WARNING][5437] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" WorkloadEndpoint="localhost-k8s-whisker--6d9765465--897pg-eth0" Jul 12 00:37:26.192618 env[1324]: 2025-07-12 00:37:26.160 [INFO][5437] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" Jul 12 00:37:26.192618 env[1324]: 2025-07-12 00:37:26.160 [INFO][5437] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" iface="eth0" netns="" Jul 12 00:37:26.192618 env[1324]: 2025-07-12 00:37:26.160 [INFO][5437] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" Jul 12 00:37:26.192618 env[1324]: 2025-07-12 00:37:26.160 [INFO][5437] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" Jul 12 00:37:26.192618 env[1324]: 2025-07-12 00:37:26.180 [INFO][5446] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" HandleID="k8s-pod-network.939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" Workload="localhost-k8s-whisker--6d9765465--897pg-eth0" Jul 12 00:37:26.192618 env[1324]: 2025-07-12 00:37:26.180 [INFO][5446] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:37:26.192618 env[1324]: 2025-07-12 00:37:26.180 [INFO][5446] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:37:26.192618 env[1324]: 2025-07-12 00:37:26.188 [WARNING][5446] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" HandleID="k8s-pod-network.939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" Workload="localhost-k8s-whisker--6d9765465--897pg-eth0" Jul 12 00:37:26.192618 env[1324]: 2025-07-12 00:37:26.188 [INFO][5446] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" HandleID="k8s-pod-network.939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" Workload="localhost-k8s-whisker--6d9765465--897pg-eth0" Jul 12 00:37:26.192618 env[1324]: 2025-07-12 00:37:26.189 [INFO][5446] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:37:26.192618 env[1324]: 2025-07-12 00:37:26.191 [INFO][5437] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273" Jul 12 00:37:26.193049 env[1324]: time="2025-07-12T00:37:26.193015628Z" level=info msg="TearDown network for sandbox \"939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273\" successfully" Jul 12 00:37:26.196018 env[1324]: time="2025-07-12T00:37:26.195990613Z" level=info msg="RemovePodSandbox \"939a1ea73c2c8bb89a8c00f064f2f72aa0d6020330b717615e40967d4a72d273\" returns successfully" Jul 12 00:37:26.196628 env[1324]: time="2025-07-12T00:37:26.196580249Z" level=info msg="StopPodSandbox for \"6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c\"" Jul 12 00:37:26.260125 env[1324]: 2025-07-12 00:37:26.227 [WARNING][5464] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--6mdg7-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"118681c8-63c7-4aed-ae42-07c9da34ea65", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 36, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"localhost", ContainerID:"9b4fd99b36ce2077b24854f58397b8171a3d5754355d0ab316e3db2e36b90329", Pod:"goldmane-58fd7646b9-6mdg7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie003591177a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:37:26.260125 env[1324]: 2025-07-12 00:37:26.227 [INFO][5464] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" Jul 12 00:37:26.260125 env[1324]: 2025-07-12 00:37:26.227 [INFO][5464] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" iface="eth0" netns="" Jul 12 00:37:26.260125 env[1324]: 2025-07-12 00:37:26.227 [INFO][5464] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" Jul 12 00:37:26.260125 env[1324]: 2025-07-12 00:37:26.227 [INFO][5464] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" Jul 12 00:37:26.260125 env[1324]: 2025-07-12 00:37:26.245 [INFO][5472] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" HandleID="k8s-pod-network.6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" Workload="localhost-k8s-goldmane--58fd7646b9--6mdg7-eth0" Jul 12 00:37:26.260125 env[1324]: 2025-07-12 00:37:26.245 [INFO][5472] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:37:26.260125 env[1324]: 2025-07-12 00:37:26.245 [INFO][5472] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:37:26.260125 env[1324]: 2025-07-12 00:37:26.254 [WARNING][5472] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" HandleID="k8s-pod-network.6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" Workload="localhost-k8s-goldmane--58fd7646b9--6mdg7-eth0" Jul 12 00:37:26.260125 env[1324]: 2025-07-12 00:37:26.254 [INFO][5472] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" HandleID="k8s-pod-network.6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" Workload="localhost-k8s-goldmane--58fd7646b9--6mdg7-eth0" Jul 12 00:37:26.260125 env[1324]: 2025-07-12 00:37:26.255 [INFO][5472] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:37:26.260125 env[1324]: 2025-07-12 00:37:26.257 [INFO][5464] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" Jul 12 00:37:26.260611 env[1324]: time="2025-07-12T00:37:26.260575466Z" level=info msg="TearDown network for sandbox \"6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c\" successfully" Jul 12 00:37:26.260734 env[1324]: time="2025-07-12T00:37:26.260714435Z" level=info msg="StopPodSandbox for \"6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c\" returns successfully" Jul 12 00:37:26.261436 env[1324]: time="2025-07-12T00:37:26.261406758Z" level=info msg="RemovePodSandbox for \"6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c\"" Jul 12 00:37:26.261508 env[1324]: time="2025-07-12T00:37:26.261443520Z" level=info msg="Forcibly stopping sandbox \"6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c\"" Jul 12 00:37:26.329201 env[1324]: 2025-07-12 00:37:26.295 [WARNING][5491] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't 
delete WEP. ContainerID="6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--6mdg7-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"118681c8-63c7-4aed-ae42-07c9da34ea65", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 36, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9b4fd99b36ce2077b24854f58397b8171a3d5754355d0ab316e3db2e36b90329", Pod:"goldmane-58fd7646b9-6mdg7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie003591177a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:37:26.329201 env[1324]: 2025-07-12 00:37:26.297 [INFO][5491] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" Jul 12 00:37:26.329201 env[1324]: 2025-07-12 00:37:26.297 [INFO][5491] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" iface="eth0" netns="" Jul 12 00:37:26.329201 env[1324]: 2025-07-12 00:37:26.297 [INFO][5491] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" Jul 12 00:37:26.329201 env[1324]: 2025-07-12 00:37:26.297 [INFO][5491] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" Jul 12 00:37:26.329201 env[1324]: 2025-07-12 00:37:26.315 [INFO][5500] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" HandleID="k8s-pod-network.6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" Workload="localhost-k8s-goldmane--58fd7646b9--6mdg7-eth0" Jul 12 00:37:26.329201 env[1324]: 2025-07-12 00:37:26.315 [INFO][5500] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:37:26.329201 env[1324]: 2025-07-12 00:37:26.315 [INFO][5500] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:37:26.329201 env[1324]: 2025-07-12 00:37:26.324 [WARNING][5500] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" HandleID="k8s-pod-network.6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" Workload="localhost-k8s-goldmane--58fd7646b9--6mdg7-eth0" Jul 12 00:37:26.329201 env[1324]: 2025-07-12 00:37:26.324 [INFO][5500] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" HandleID="k8s-pod-network.6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" Workload="localhost-k8s-goldmane--58fd7646b9--6mdg7-eth0" Jul 12 00:37:26.329201 env[1324]: 2025-07-12 00:37:26.325 [INFO][5500] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:37:26.329201 env[1324]: 2025-07-12 00:37:26.327 [INFO][5491] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c" Jul 12 00:37:26.329649 env[1324]: time="2025-07-12T00:37:26.329229492Z" level=info msg="TearDown network for sandbox \"6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c\" successfully" Jul 12 00:37:26.332226 env[1324]: time="2025-07-12T00:37:26.332198597Z" level=info msg="RemovePodSandbox \"6beecd5474eebf9033be5bde054314104cfad81cd34c5842e4c9f2984873a60c\" returns successfully" Jul 12 00:37:26.332657 env[1324]: time="2025-07-12T00:37:26.332631784Z" level=info msg="StopPodSandbox for \"37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61\"" Jul 12 00:37:26.397520 env[1324]: 2025-07-12 00:37:26.363 [WARNING][5519] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d9dcbc845--6nfvq-eth0", GenerateName:"calico-apiserver-7d9dcbc845-", Namespace:"calico-apiserver", SelfLink:"", UID:"fecfff6f-79d3-4090-96c1-83913da0527a", ResourceVersion:"1168", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 36, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d9dcbc845", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dc2bb6f1488e180aeb18054610bb6436630079b8cf6e25740c7d44b9fcd201d7", Pod:"calico-apiserver-7d9dcbc845-6nfvq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7d6c70dc994", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:37:26.397520 env[1324]: 2025-07-12 00:37:26.363 [INFO][5519] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" Jul 12 00:37:26.397520 env[1324]: 2025-07-12 00:37:26.363 [INFO][5519] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" iface="eth0" netns="" Jul 12 00:37:26.397520 env[1324]: 2025-07-12 00:37:26.363 [INFO][5519] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" Jul 12 00:37:26.397520 env[1324]: 2025-07-12 00:37:26.363 [INFO][5519] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" Jul 12 00:37:26.397520 env[1324]: 2025-07-12 00:37:26.384 [INFO][5529] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" HandleID="k8s-pod-network.37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" Workload="localhost-k8s-calico--apiserver--7d9dcbc845--6nfvq-eth0" Jul 12 00:37:26.397520 env[1324]: 2025-07-12 00:37:26.384 [INFO][5529] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 12 00:37:26.397520 env[1324]: 2025-07-12 00:37:26.384 [INFO][5529] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:37:26.397520 env[1324]: 2025-07-12 00:37:26.392 [WARNING][5529] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" HandleID="k8s-pod-network.37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" Workload="localhost-k8s-calico--apiserver--7d9dcbc845--6nfvq-eth0" Jul 12 00:37:26.397520 env[1324]: 2025-07-12 00:37:26.392 [INFO][5529] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" HandleID="k8s-pod-network.37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" Workload="localhost-k8s-calico--apiserver--7d9dcbc845--6nfvq-eth0" Jul 12 00:37:26.397520 env[1324]: 2025-07-12 00:37:26.394 [INFO][5529] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:37:26.397520 env[1324]: 2025-07-12 00:37:26.396 [INFO][5519] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" Jul 12 00:37:26.397938 env[1324]: time="2025-07-12T00:37:26.397545338Z" level=info msg="TearDown network for sandbox \"37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61\" successfully" Jul 12 00:37:26.397938 env[1324]: time="2025-07-12T00:37:26.397576100Z" level=info msg="StopPodSandbox for \"37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61\" returns successfully" Jul 12 00:37:26.398033 env[1324]: time="2025-07-12T00:37:26.397995806Z" level=info msg="RemovePodSandbox for \"37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61\"" Jul 12 00:37:26.398095 env[1324]: time="2025-07-12T00:37:26.398032608Z" level=info msg="Forcibly stopping sandbox \"37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61\"" Jul 12 00:37:26.461197 env[1324]: 2025-07-12 00:37:26.429 [WARNING][5547] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d9dcbc845--6nfvq-eth0", GenerateName:"calico-apiserver-7d9dcbc845-", Namespace:"calico-apiserver", SelfLink:"", UID:"fecfff6f-79d3-4090-96c1-83913da0527a", ResourceVersion:"1168", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 36, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d9dcbc845", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dc2bb6f1488e180aeb18054610bb6436630079b8cf6e25740c7d44b9fcd201d7", Pod:"calico-apiserver-7d9dcbc845-6nfvq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7d6c70dc994", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:37:26.461197 env[1324]: 2025-07-12 00:37:26.429 [INFO][5547] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" Jul 12 00:37:26.461197 env[1324]: 2025-07-12 00:37:26.429 [INFO][5547] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" iface="eth0" netns="" Jul 12 00:37:26.461197 env[1324]: 2025-07-12 00:37:26.429 [INFO][5547] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" Jul 12 00:37:26.461197 env[1324]: 2025-07-12 00:37:26.429 [INFO][5547] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" Jul 12 00:37:26.461197 env[1324]: 2025-07-12 00:37:26.447 [INFO][5555] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" HandleID="k8s-pod-network.37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" Workload="localhost-k8s-calico--apiserver--7d9dcbc845--6nfvq-eth0" Jul 12 00:37:26.461197 env[1324]: 2025-07-12 00:37:26.447 [INFO][5555] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:37:26.461197 env[1324]: 2025-07-12 00:37:26.447 [INFO][5555] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:37:26.461197 env[1324]: 2025-07-12 00:37:26.456 [WARNING][5555] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" HandleID="k8s-pod-network.37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" Workload="localhost-k8s-calico--apiserver--7d9dcbc845--6nfvq-eth0" Jul 12 00:37:26.461197 env[1324]: 2025-07-12 00:37:26.456 [INFO][5555] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" HandleID="k8s-pod-network.37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" Workload="localhost-k8s-calico--apiserver--7d9dcbc845--6nfvq-eth0" Jul 12 00:37:26.461197 env[1324]: 2025-07-12 00:37:26.457 [INFO][5555] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:37:26.461197 env[1324]: 2025-07-12 00:37:26.459 [INFO][5547] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61" Jul 12 00:37:26.461827 env[1324]: time="2025-07-12T00:37:26.461230495Z" level=info msg="TearDown network for sandbox \"37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61\" successfully" Jul 12 00:37:26.464364 env[1324]: time="2025-07-12T00:37:26.464334888Z" level=info msg="RemovePodSandbox \"37fda3a90d4896663f54e6435fd693f8fa8ca41162504032657617d53c7cbd61\" returns successfully" Jul 12 00:37:26.464825 env[1324]: time="2025-07-12T00:37:26.464797157Z" level=info msg="StopPodSandbox for \"0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc\"" Jul 12 00:37:26.528099 env[1324]: 2025-07-12 00:37:26.496 [WARNING][5573] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f46b5b9d6--92dnp-eth0", GenerateName:"calico-kube-controllers-7f46b5b9d6-", Namespace:"calico-system", SelfLink:"", UID:"8eb4ca6c-aaff-4fe8-9f0e-771c33b4200d", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 36, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f46b5b9d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"515cd1ebaa213931794fb390e47af7448adb46df8bca52119c4de102ff5d30a7", Pod:"calico-kube-controllers-7f46b5b9d6-92dnp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calife389d88381", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:37:26.528099 env[1324]: 2025-07-12 00:37:26.497 [INFO][5573] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc" Jul 12 00:37:26.528099 env[1324]: 2025-07-12 00:37:26.497 [INFO][5573] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc" iface="eth0" netns="" Jul 12 00:37:26.528099 env[1324]: 2025-07-12 00:37:26.497 [INFO][5573] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc" Jul 12 00:37:26.528099 env[1324]: 2025-07-12 00:37:26.497 [INFO][5573] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc" Jul 12 00:37:26.528099 env[1324]: 2025-07-12 00:37:26.514 [INFO][5582] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc" HandleID="k8s-pod-network.0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc" Workload="localhost-k8s-calico--kube--controllers--7f46b5b9d6--92dnp-eth0" Jul 12 00:37:26.528099 env[1324]: 2025-07-12 00:37:26.515 [INFO][5582] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:37:26.528099 env[1324]: 2025-07-12 00:37:26.515 [INFO][5582] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:37:26.528099 env[1324]: 2025-07-12 00:37:26.523 [WARNING][5582] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc" HandleID="k8s-pod-network.0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc" Workload="localhost-k8s-calico--kube--controllers--7f46b5b9d6--92dnp-eth0" Jul 12 00:37:26.528099 env[1324]: 2025-07-12 00:37:26.523 [INFO][5582] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc" HandleID="k8s-pod-network.0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc" Workload="localhost-k8s-calico--kube--controllers--7f46b5b9d6--92dnp-eth0" Jul 12 00:37:26.528099 env[1324]: 2025-07-12 00:37:26.524 [INFO][5582] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:37:26.528099 env[1324]: 2025-07-12 00:37:26.526 [INFO][5573] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc" Jul 12 00:37:26.528099 env[1324]: time="2025-07-12T00:37:26.528076769Z" level=info msg="TearDown network for sandbox \"0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc\" successfully" Jul 12 00:37:26.528539 env[1324]: time="2025-07-12T00:37:26.528114131Z" level=info msg="StopPodSandbox for \"0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc\" returns successfully" Jul 12 00:37:26.529874 env[1324]: time="2025-07-12T00:37:26.529846439Z" level=info msg="RemovePodSandbox for \"0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc\"" Jul 12 00:37:26.529931 env[1324]: time="2025-07-12T00:37:26.529895962Z" level=info msg="Forcibly stopping sandbox \"0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc\"" Jul 12 00:37:26.593026 env[1324]: 2025-07-12 00:37:26.561 [WARNING][5600] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f46b5b9d6--92dnp-eth0", GenerateName:"calico-kube-controllers-7f46b5b9d6-", Namespace:"calico-system", SelfLink:"", UID:"8eb4ca6c-aaff-4fe8-9f0e-771c33b4200d", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 36, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f46b5b9d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"515cd1ebaa213931794fb390e47af7448adb46df8bca52119c4de102ff5d30a7", Pod:"calico-kube-controllers-7f46b5b9d6-92dnp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calife389d88381", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:37:26.593026 env[1324]: 2025-07-12 00:37:26.561 [INFO][5600] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc" Jul 12 00:37:26.593026 env[1324]: 2025-07-12 00:37:26.561 [INFO][5600] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc" iface="eth0" netns="" Jul 12 00:37:26.593026 env[1324]: 2025-07-12 00:37:26.561 [INFO][5600] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc" Jul 12 00:37:26.593026 env[1324]: 2025-07-12 00:37:26.561 [INFO][5600] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc" Jul 12 00:37:26.593026 env[1324]: 2025-07-12 00:37:26.578 [INFO][5611] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc" HandleID="k8s-pod-network.0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc" Workload="localhost-k8s-calico--kube--controllers--7f46b5b9d6--92dnp-eth0" Jul 12 00:37:26.593026 env[1324]: 2025-07-12 00:37:26.579 [INFO][5611] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:37:26.593026 env[1324]: 2025-07-12 00:37:26.579 [INFO][5611] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:37:26.593026 env[1324]: 2025-07-12 00:37:26.588 [WARNING][5611] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc" HandleID="k8s-pod-network.0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc" Workload="localhost-k8s-calico--kube--controllers--7f46b5b9d6--92dnp-eth0" Jul 12 00:37:26.593026 env[1324]: 2025-07-12 00:37:26.588 [INFO][5611] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc" HandleID="k8s-pod-network.0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc" Workload="localhost-k8s-calico--kube--controllers--7f46b5b9d6--92dnp-eth0" Jul 12 00:37:26.593026 env[1324]: 2025-07-12 00:37:26.589 [INFO][5611] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:37:26.593026 env[1324]: 2025-07-12 00:37:26.591 [INFO][5600] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc" Jul 12 00:37:26.593671 env[1324]: time="2025-07-12T00:37:26.593043526Z" level=info msg="TearDown network for sandbox \"0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc\" successfully" Jul 12 00:37:26.596199 env[1324]: time="2025-07-12T00:37:26.596169960Z" level=info msg="RemovePodSandbox \"0e7771ffc0ed81c5a48ea89f9eedafc84d7a3afa42de04050f6163a555186ecc\" returns successfully" Jul 12 00:37:26.881104 kubelet[2103]: I0712 00:37:26.881002 2103 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:37:26.922000 audit[5620]: NETFILTER_CFG table=filter:130 family=2 entries=8 op=nft_register_rule pid=5620 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:37:26.922000 audit[5620]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=fffff041da80 a2=0 a3=1 items=0 ppid=2213 pid=5620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:26.922000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:37:26.929000 audit[5620]: NETFILTER_CFG table=nat:131 family=2 entries=38 op=nft_register_chain pid=5620 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:37:26.929000 audit[5620]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=12772 a0=3 a1=fffff041da80 a2=0 a3=1 items=0 ppid=2213 pid=5620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:26.929000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:37:27.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.111:22-10.0.0.1:47624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:37:27.663812 systemd[1]: Started sshd@13-10.0.0.111:22-10.0.0.1:47624.service. Jul 12 00:37:27.665302 kernel: kauditd_printk_skb: 13 callbacks suppressed Jul 12 00:37:27.665370 kernel: audit: type=1130 audit(1752280647.663:477): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.111:22-10.0.0.1:47624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:37:27.723985 sshd[5621]: Accepted publickey for core from 10.0.0.1 port 47624 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:37:27.723000 audit[5621]: USER_ACCT pid=5621 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:27.727704 kernel: audit: type=1101 audit(1752280647.723:478): pid=5621 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:27.727753 kernel: audit: type=1103 audit(1752280647.727:479): pid=5621 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:27.727000 audit[5621]: CRED_ACQ pid=5621 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:27.728051 sshd[5621]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:37:27.731875 systemd-logind[1309]: New session 14 of user core. Jul 12 00:37:27.732542 systemd[1]: Started session-14.scope. 
Jul 12 00:37:27.739591 kernel: audit: type=1006 audit(1752280647.727:480): pid=5621 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jul 12 00:37:27.727000 audit[5621]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe8765ae0 a2=3 a3=1 items=0 ppid=1 pid=5621 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:27.742948 kernel: audit: type=1300 audit(1752280647.727:480): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe8765ae0 a2=3 a3=1 items=0 ppid=1 pid=5621 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:27.727000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:37:27.744051 kernel: audit: type=1327 audit(1752280647.727:480): proctitle=737368643A20636F7265205B707269765D Jul 12 00:37:27.744000 audit[5621]: USER_START pid=5621 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:27.745000 audit[5624]: CRED_ACQ pid=5624 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:27.752004 kernel: audit: type=1105 audit(1752280647.744:481): pid=5621 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 
12 00:37:27.752093 kernel: audit: type=1103 audit(1752280647.745:482): pid=5624 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:27.908241 sshd[5621]: pam_unix(sshd:session): session closed for user core Jul 12 00:37:27.908000 audit[5621]: USER_END pid=5621 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:27.910825 systemd-logind[1309]: Session 14 logged out. Waiting for processes to exit. Jul 12 00:37:27.910992 systemd[1]: sshd@13-10.0.0.111:22-10.0.0.1:47624.service: Deactivated successfully. Jul 12 00:37:27.911837 systemd[1]: session-14.scope: Deactivated successfully. Jul 12 00:37:27.912215 systemd-logind[1309]: Removed session 14. 
Jul 12 00:37:27.908000 audit[5621]: CRED_DISP pid=5621 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:27.916068 kernel: audit: type=1106 audit(1752280647.908:483): pid=5621 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:27.916143 kernel: audit: type=1104 audit(1752280647.908:484): pid=5621 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:27.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.111:22-10.0.0.1:47624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:37:28.440650 systemd[1]: run-containerd-runc-k8s.io-d166d55751e78e1e0db2e0d93257cf0b945cf55ee557b4ddec2a01ddc912e6af-runc.7nQGXT.mount: Deactivated successfully. Jul 12 00:37:32.912076 systemd[1]: Started sshd@14-10.0.0.111:22-10.0.0.1:57110.service. Jul 12 00:37:32.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.111:22-10.0.0.1:57110 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:37:32.913057 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 12 00:37:32.913121 kernel: audit: type=1130 audit(1752280652.910:486): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.111:22-10.0.0.1:57110 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:37:32.958000 audit[5666]: USER_ACCT pid=5666 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:32.960555 sshd[5666]: Accepted publickey for core from 10.0.0.1 port 57110 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:37:32.962274 sshd[5666]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:37:32.960000 audit[5666]: CRED_ACQ pid=5666 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:32.966940 kernel: audit: type=1101 audit(1752280652.958:487): pid=5666 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:32.967000 kernel: audit: type=1103 audit(1752280652.960:488): pid=5666 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:32.969083 kernel: audit: type=1006 audit(1752280652.960:489): pid=5666 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) 
old-ses=4294967295 ses=15 res=1 Jul 12 00:37:32.960000 audit[5666]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffea841480 a2=3 a3=1 items=0 ppid=1 pid=5666 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:32.970772 systemd-logind[1309]: New session 15 of user core. Jul 12 00:37:32.971016 systemd[1]: Started session-15.scope. Jul 12 00:37:32.975887 kernel: audit: type=1300 audit(1752280652.960:489): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffea841480 a2=3 a3=1 items=0 ppid=1 pid=5666 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:32.975994 kernel: audit: type=1327 audit(1752280652.960:489): proctitle=737368643A20636F7265205B707269765D Jul 12 00:37:32.960000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:37:32.986276 kernel: audit: type=1105 audit(1752280652.978:490): pid=5666 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:32.978000 audit[5666]: USER_START pid=5666 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:32.984000 audit[5669]: CRED_ACQ pid=5669 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:32.992550 kernel: 
audit: type=1103 audit(1752280652.984:491): pid=5669 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:33.147584 sshd[5666]: pam_unix(sshd:session): session closed for user core Jul 12 00:37:33.147000 audit[5666]: USER_END pid=5666 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:33.151149 systemd-logind[1309]: Session 15 logged out. Waiting for processes to exit. Jul 12 00:37:33.151691 systemd[1]: sshd@14-10.0.0.111:22-10.0.0.1:57110.service: Deactivated successfully. Jul 12 00:37:33.148000 audit[5666]: CRED_DISP pid=5666 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:33.152548 systemd[1]: session-15.scope: Deactivated successfully. Jul 12 00:37:33.153350 systemd-logind[1309]: Removed session 15. 
Jul 12 00:37:33.159458 kernel: audit: type=1106 audit(1752280653.147:492): pid=5666 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:33.159542 kernel: audit: type=1104 audit(1752280653.148:493): pid=5666 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:33.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.111:22-10.0.0.1:57110 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:37:38.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.111:22-10.0.0.1:57126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:37:38.150294 systemd[1]: Started sshd@15-10.0.0.111:22-10.0.0.1:57126.service. Jul 12 00:37:38.154060 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 12 00:37:38.154134 kernel: audit: type=1130 audit(1752280658.150:495): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.111:22-10.0.0.1:57126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:37:38.198000 audit[5704]: USER_ACCT pid=5704 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:38.198642 sshd[5704]: Accepted publickey for core from 10.0.0.1 port 57126 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:37:38.200291 sshd[5704]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:37:38.199000 audit[5704]: CRED_ACQ pid=5704 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:38.205038 kernel: audit: type=1101 audit(1752280658.198:496): pid=5704 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:38.205866 kernel: audit: type=1103 audit(1752280658.199:497): pid=5704 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:38.205897 kernel: audit: type=1006 audit(1752280658.199:498): pid=5704 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jul 12 00:37:38.205165 systemd-logind[1309]: New session 16 of user core. 
Jul 12 00:37:38.207578 kernel: audit: type=1300 audit(1752280658.199:498): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc6fec4c0 a2=3 a3=1 items=0 ppid=1 pid=5704 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:38.199000 audit[5704]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc6fec4c0 a2=3 a3=1 items=0 ppid=1 pid=5704 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:38.206014 systemd[1]: Started session-16.scope. Jul 12 00:37:38.210837 kernel: audit: type=1327 audit(1752280658.199:498): proctitle=737368643A20636F7265205B707269765D Jul 12 00:37:38.199000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:37:38.209000 audit[5704]: USER_START pid=5704 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:38.215699 kernel: audit: type=1105 audit(1752280658.209:499): pid=5704 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:38.210000 audit[5707]: CRED_ACQ pid=5707 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:38.218631 kernel: audit: type=1103 audit(1752280658.210:500): pid=5707 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:38.341644 sshd[5704]: pam_unix(sshd:session): session closed for user core Jul 12 00:37:38.342000 audit[5704]: USER_END pid=5704 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:38.344357 systemd[1]: Started sshd@16-10.0.0.111:22-10.0.0.1:57140.service. Jul 12 00:37:38.342000 audit[5704]: CRED_DISP pid=5704 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:38.346519 systemd-logind[1309]: Session 16 logged out. Waiting for processes to exit. Jul 12 00:37:38.346693 systemd[1]: sshd@15-10.0.0.111:22-10.0.0.1:57126.service: Deactivated successfully. Jul 12 00:37:38.347600 systemd[1]: session-16.scope: Deactivated successfully. Jul 12 00:37:38.347999 systemd-logind[1309]: Removed session 16. 
Jul 12 00:37:38.349660 kernel: audit: type=1106 audit(1752280658.342:501): pid=5704 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:38.349740 kernel: audit: type=1104 audit(1752280658.342:502): pid=5704 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:38.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.111:22-10.0.0.1:57140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:37:38.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.111:22-10.0.0.1:57126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:37:38.392000 audit[5716]: USER_ACCT pid=5716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:38.393090 sshd[5716]: Accepted publickey for core from 10.0.0.1 port 57140 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:37:38.393000 audit[5716]: CRED_ACQ pid=5716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:38.394000 audit[5716]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc1dce3e0 a2=3 a3=1 items=0 ppid=1 pid=5716 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:38.394000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:37:38.394700 sshd[5716]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:37:38.397935 systemd-logind[1309]: New session 17 of user core. Jul 12 00:37:38.398739 systemd[1]: Started session-17.scope. 
Jul 12 00:37:38.402000 audit[5716]: USER_START pid=5716 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:38.404000 audit[5721]: CRED_ACQ pid=5721 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:38.615112 sshd[5716]: pam_unix(sshd:session): session closed for user core Jul 12 00:37:38.615000 audit[5716]: USER_END pid=5716 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:38.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.111:22-10.0.0.1:57150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:37:38.616000 audit[5716]: CRED_DISP pid=5716 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:38.616425 systemd[1]: Started sshd@17-10.0.0.111:22-10.0.0.1:57150.service. Jul 12 00:37:38.621100 systemd[1]: sshd@16-10.0.0.111:22-10.0.0.1:57140.service: Deactivated successfully. Jul 12 00:37:38.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.111:22-10.0.0.1:57140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:37:38.622225 systemd-logind[1309]: Session 17 logged out. Waiting for processes to exit. Jul 12 00:37:38.622268 systemd[1]: session-17.scope: Deactivated successfully. Jul 12 00:37:38.623105 systemd-logind[1309]: Removed session 17. Jul 12 00:37:38.664000 audit[5728]: USER_ACCT pid=5728 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:38.664998 sshd[5728]: Accepted publickey for core from 10.0.0.1 port 57150 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:37:38.666000 audit[5728]: CRED_ACQ pid=5728 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:38.666000 audit[5728]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe5e83c10 a2=3 a3=1 items=0 ppid=1 pid=5728 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:38.666000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:37:38.667224 sshd[5728]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:37:38.671066 systemd-logind[1309]: New session 18 of user core. Jul 12 00:37:38.671931 systemd[1]: Started session-18.scope. 
Jul 12 00:37:38.674000 audit[5728]: USER_START pid=5728 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:38.676000 audit[5733]: CRED_ACQ pid=5733 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:40.326000 audit[5746]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=5746 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:37:40.326000 audit[5746]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11944 a0=3 a1=ffffe5511c40 a2=0 a3=1 items=0 ppid=2213 pid=5746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:40.326000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:37:40.333872 sshd[5728]: pam_unix(sshd:session): session closed for user core Jul 12 00:37:40.333000 audit[5746]: NETFILTER_CFG table=nat:133 family=2 entries=26 op=nft_register_rule pid=5746 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:37:40.333000 audit[5746]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8076 a0=3 a1=ffffe5511c40 a2=0 a3=1 items=0 ppid=2213 pid=5746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:40.333000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:37:40.336356 systemd[1]: Started sshd@18-10.0.0.111:22-10.0.0.1:57162.service. Jul 12 00:37:40.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.111:22-10.0.0.1:57162 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:37:40.336000 audit[5728]: USER_END pid=5728 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:40.336000 audit[5728]: CRED_DISP pid=5728 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:40.338253 systemd[1]: sshd@17-10.0.0.111:22-10.0.0.1:57150.service: Deactivated successfully. Jul 12 00:37:40.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.111:22-10.0.0.1:57150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:37:40.339233 systemd-logind[1309]: Session 18 logged out. Waiting for processes to exit. Jul 12 00:37:40.339245 systemd[1]: session-18.scope: Deactivated successfully. Jul 12 00:37:40.341636 systemd-logind[1309]: Removed session 18. 
Jul 12 00:37:40.357000 audit[5751]: NETFILTER_CFG table=filter:134 family=2 entries=32 op=nft_register_rule pid=5751 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:37:40.357000 audit[5751]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11944 a0=3 a1=ffffe26e83d0 a2=0 a3=1 items=0 ppid=2213 pid=5751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:40.357000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:37:40.362000 audit[5751]: NETFILTER_CFG table=nat:135 family=2 entries=26 op=nft_register_rule pid=5751 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:37:40.362000 audit[5751]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8076 a0=3 a1=ffffe26e83d0 a2=0 a3=1 items=0 ppid=2213 pid=5751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:40.362000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:37:40.387000 audit[5747]: USER_ACCT pid=5747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:40.388571 sshd[5747]: Accepted publickey for core from 10.0.0.1 port 57162 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:37:40.389000 audit[5747]: CRED_ACQ pid=5747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:40.389000 audit[5747]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff4d0ff20 a2=3 a3=1 items=0 ppid=1 pid=5747 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:40.389000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:37:40.390377 sshd[5747]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:37:40.396136 systemd-logind[1309]: New session 19 of user core. Jul 12 00:37:40.396963 systemd[1]: Started session-19.scope. Jul 12 00:37:40.400000 audit[5747]: USER_START pid=5747 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:40.402000 audit[5754]: CRED_ACQ pid=5754 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:40.894333 sshd[5747]: pam_unix(sshd:session): session closed for user core Jul 12 00:37:40.894000 audit[5747]: USER_END pid=5747 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:40.894000 audit[5747]: CRED_DISP pid=5747 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 
00:37:40.895607 systemd[1]: Started sshd@19-10.0.0.111:22-10.0.0.1:57170.service. Jul 12 00:37:40.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.111:22-10.0.0.1:57170 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:37:40.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.111:22-10.0.0.1:57162 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:37:40.898363 systemd[1]: sshd@18-10.0.0.111:22-10.0.0.1:57162.service: Deactivated successfully. Jul 12 00:37:40.899495 systemd[1]: session-19.scope: Deactivated successfully. Jul 12 00:37:40.903494 systemd-logind[1309]: Session 19 logged out. Waiting for processes to exit. Jul 12 00:37:40.905970 systemd-logind[1309]: Removed session 19. Jul 12 00:37:40.947000 audit[5762]: USER_ACCT pid=5762 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:40.948118 sshd[5762]: Accepted publickey for core from 10.0.0.1 port 57170 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:37:40.949000 audit[5762]: CRED_ACQ pid=5762 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:40.949000 audit[5762]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffce1447a0 a2=3 a3=1 items=0 ppid=1 pid=5762 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:40.949000 
audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:37:40.949867 sshd[5762]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:37:40.953597 systemd-logind[1309]: New session 20 of user core. Jul 12 00:37:40.954280 systemd[1]: Started session-20.scope. Jul 12 00:37:40.957000 audit[5762]: USER_START pid=5762 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:40.959000 audit[5767]: CRED_ACQ pid=5767 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:41.067873 sshd[5762]: pam_unix(sshd:session): session closed for user core Jul 12 00:37:41.068000 audit[5762]: USER_END pid=5762 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:41.068000 audit[5762]: CRED_DISP pid=5762 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:41.070896 systemd[1]: sshd@19-10.0.0.111:22-10.0.0.1:57170.service: Deactivated successfully. Jul 12 00:37:41.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.111:22-10.0.0.1:57170 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:37:41.071908 systemd-logind[1309]: Session 20 logged out. Waiting for processes to exit. Jul 12 00:37:41.071976 systemd[1]: session-20.scope: Deactivated successfully. Jul 12 00:37:41.072673 systemd-logind[1309]: Removed session 20. Jul 12 00:37:42.906306 systemd[1]: run-containerd-runc-k8s.io-91cace5e737b247a4afc2cde5e5915e43dcdd8d8e553229a17eada8168cc8855-runc.g6REkV.mount: Deactivated successfully. Jul 12 00:37:45.892000 audit[5799]: NETFILTER_CFG table=filter:136 family=2 entries=20 op=nft_register_rule pid=5799 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:37:45.894680 kernel: kauditd_printk_skb: 57 callbacks suppressed Jul 12 00:37:45.894767 kernel: audit: type=1325 audit(1752280665.892:544): table=filter:136 family=2 entries=20 op=nft_register_rule pid=5799 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:37:45.892000 audit[5799]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffc75b7500 a2=0 a3=1 items=0 ppid=2213 pid=5799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:45.901131 kernel: audit: type=1300 audit(1752280665.892:544): arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffc75b7500 a2=0 a3=1 items=0 ppid=2213 pid=5799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:45.901261 kernel: audit: type=1327 audit(1752280665.892:544): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:37:45.892000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:37:45.906000 audit[5799]: 
NETFILTER_CFG table=nat:137 family=2 entries=110 op=nft_register_chain pid=5799 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:37:45.906000 audit[5799]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=50988 a0=3 a1=ffffc75b7500 a2=0 a3=1 items=0 ppid=2213 pid=5799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:45.914994 kernel: audit: type=1325 audit(1752280665.906:545): table=nat:137 family=2 entries=110 op=nft_register_chain pid=5799 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 12 00:37:45.915068 kernel: audit: type=1300 audit(1752280665.906:545): arch=c00000b7 syscall=211 success=yes exit=50988 a0=3 a1=ffffc75b7500 a2=0 a3=1 items=0 ppid=2213 pid=5799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:45.906000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:37:45.917157 kernel: audit: type=1327 audit(1752280665.906:545): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 12 00:37:46.070543 systemd[1]: Started sshd@20-10.0.0.111:22-10.0.0.1:44592.service. Jul 12 00:37:46.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.111:22-10.0.0.1:44592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:37:46.074401 kernel: audit: type=1130 audit(1752280666.069:546): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.111:22-10.0.0.1:44592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:37:46.123000 audit[5801]: USER_ACCT pid=5801 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:46.125394 sshd[5801]: Accepted publickey for core from 10.0.0.1 port 44592 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:37:46.126547 sshd[5801]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:37:46.124000 audit[5801]: CRED_ACQ pid=5801 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:46.131484 kernel: audit: type=1101 audit(1752280666.123:547): pid=5801 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:46.131558 kernel: audit: type=1103 audit(1752280666.124:548): pid=5801 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:46.131597 kernel: audit: type=1006 audit(1752280666.124:549): pid=5801 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jul 12 00:37:46.124000 audit[5801]: SYSCALL 
arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe2c264b0 a2=3 a3=1 items=0 ppid=1 pid=5801 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:46.124000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:37:46.136015 systemd-logind[1309]: New session 21 of user core. Jul 12 00:37:46.136823 systemd[1]: Started session-21.scope. Jul 12 00:37:46.139000 audit[5801]: USER_START pid=5801 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:46.141000 audit[5804]: CRED_ACQ pid=5804 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:46.321199 sshd[5801]: pam_unix(sshd:session): session closed for user core Jul 12 00:37:46.322000 audit[5801]: USER_END pid=5801 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:46.322000 audit[5801]: CRED_DISP pid=5801 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:46.325432 systemd[1]: sshd@20-10.0.0.111:22-10.0.0.1:44592.service: Deactivated successfully. 
Jul 12 00:37:46.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.111:22-10.0.0.1:44592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:37:46.327035 systemd[1]: session-21.scope: Deactivated successfully. Jul 12 00:37:46.329051 systemd-logind[1309]: Session 21 logged out. Waiting for processes to exit. Jul 12 00:37:46.332713 systemd-logind[1309]: Removed session 21. Jul 12 00:37:47.405215 kubelet[2103]: E0712 00:37:47.405171 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:37:51.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.111:22-10.0.0.1:44608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:37:51.324112 systemd[1]: Started sshd@21-10.0.0.111:22-10.0.0.1:44608.service. Jul 12 00:37:51.324990 kernel: kauditd_printk_skb: 7 callbacks suppressed Jul 12 00:37:51.325052 kernel: audit: type=1130 audit(1752280671.322:555): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.111:22-10.0.0.1:44608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:37:51.366000 audit[5843]: USER_ACCT pid=5843 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:51.369039 sshd[5843]: Accepted publickey for core from 10.0.0.1 port 44608 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:37:51.369938 sshd[5843]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:37:51.368000 audit[5843]: CRED_ACQ pid=5843 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:51.374510 kernel: audit: type=1101 audit(1752280671.366:556): pid=5843 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:51.374586 kernel: audit: type=1103 audit(1752280671.368:557): pid=5843 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:51.376687 kernel: audit: type=1006 audit(1752280671.368:558): pid=5843 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jul 12 00:37:51.368000 audit[5843]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffee3cb8a0 a2=3 a3=1 items=0 ppid=1 pid=5843 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:51.379255 
systemd-logind[1309]: New session 22 of user core. Jul 12 00:37:51.379597 systemd[1]: Started session-22.scope. Jul 12 00:37:51.380689 kernel: audit: type=1300 audit(1752280671.368:558): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffee3cb8a0 a2=3 a3=1 items=0 ppid=1 pid=5843 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:51.368000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:37:51.381971 kernel: audit: type=1327 audit(1752280671.368:558): proctitle=737368643A20636F7265205B707269765D Jul 12 00:37:51.383000 audit[5843]: USER_START pid=5843 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:51.387000 audit[5846]: CRED_ACQ pid=5846 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:51.392111 kernel: audit: type=1105 audit(1752280671.383:559): pid=5843 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:51.392185 kernel: audit: type=1103 audit(1752280671.387:560): pid=5846 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:51.509677 sshd[5843]: pam_unix(sshd:session): session closed for user core Jul 12 
00:37:51.509000 audit[5843]: USER_END pid=5843 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:51.514914 systemd[1]: sshd@21-10.0.0.111:22-10.0.0.1:44608.service: Deactivated successfully. Jul 12 00:37:51.511000 audit[5843]: CRED_DISP pid=5843 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:51.519788 kernel: audit: type=1106 audit(1752280671.509:561): pid=5843 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:51.519885 kernel: audit: type=1104 audit(1752280671.511:562): pid=5843 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:51.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.111:22-10.0.0.1:44608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:37:51.520248 systemd[1]: session-22.scope: Deactivated successfully. Jul 12 00:37:51.520744 systemd-logind[1309]: Session 22 logged out. Waiting for processes to exit. Jul 12 00:37:51.521945 systemd-logind[1309]: Removed session 22. 
Jul 12 00:37:53.405995 kubelet[2103]: E0712 00:37:53.405957 2103 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:37:56.512154 systemd[1]: Started sshd@22-10.0.0.111:22-10.0.0.1:52862.service. Jul 12 00:37:56.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.111:22-10.0.0.1:52862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:37:56.515379 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 12 00:37:56.515480 kernel: audit: type=1130 audit(1752280676.510:564): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.111:22-10.0.0.1:52862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:37:56.561000 audit[5876]: USER_ACCT pid=5876 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:56.563212 sshd[5876]: Accepted publickey for core from 10.0.0.1 port 52862 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:37:56.564532 sshd[5876]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:37:56.562000 audit[5876]: CRED_ACQ pid=5876 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:56.569304 kernel: audit: type=1101 audit(1752280676.561:565): pid=5876 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:56.569374 kernel: audit: type=1103 audit(1752280676.562:566): pid=5876 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:56.569422 kernel: audit: type=1006 audit(1752280676.562:567): pid=5876 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jul 12 00:37:56.572336 kernel: audit: type=1300 audit(1752280676.562:567): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcd62cfb0 a2=3 a3=1 items=0 ppid=1 pid=5876 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:56.562000 audit[5876]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcd62cfb0 a2=3 a3=1 items=0 ppid=1 pid=5876 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:37:56.572174 systemd[1]: Started session-23.scope. Jul 12 00:37:56.573368 systemd-logind[1309]: New session 23 of user core. 
Jul 12 00:37:56.574685 kernel: audit: type=1327 audit(1752280676.562:567): proctitle=737368643A20636F7265205B707269765D Jul 12 00:37:56.562000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:37:56.577000 audit[5876]: USER_START pid=5876 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:56.578000 audit[5879]: CRED_ACQ pid=5879 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:56.584980 kernel: audit: type=1105 audit(1752280676.577:568): pid=5876 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:56.585033 kernel: audit: type=1103 audit(1752280676.578:569): pid=5879 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:56.733232 sshd[5876]: pam_unix(sshd:session): session closed for user core Jul 12 00:37:56.732000 audit[5876]: USER_END pid=5876 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:56.736522 systemd[1]: sshd@22-10.0.0.111:22-10.0.0.1:52862.service: Deactivated successfully. 
Jul 12 00:37:56.733000 audit[5876]: CRED_DISP pid=5876 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:56.738154 systemd[1]: session-23.scope: Deactivated successfully. Jul 12 00:37:56.738820 systemd-logind[1309]: Session 23 logged out. Waiting for processes to exit. Jul 12 00:37:56.739982 systemd-logind[1309]: Removed session 23. Jul 12 00:37:56.740998 kernel: audit: type=1106 audit(1752280676.732:570): pid=5876 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:56.741062 kernel: audit: type=1104 audit(1752280676.733:571): pid=5876 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:37:56.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.111:22-10.0.0.1:52862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:38:01.737108 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 12 00:38:01.737295 kernel: audit: type=1130 audit(1752280681.734:573): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.111:22-10.0.0.1:52864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:38:01.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.111:22-10.0.0.1:52864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:38:01.735930 systemd[1]: Started sshd@23-10.0.0.111:22-10.0.0.1:52864.service. Jul 12 00:38:01.785000 audit[5890]: USER_ACCT pid=5890 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:38:01.786876 sshd[5890]: Accepted publickey for core from 10.0.0.1 port 52864 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:38:01.788082 sshd[5890]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:38:01.786000 audit[5890]: CRED_ACQ pid=5890 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:38:01.792665 kernel: audit: type=1101 audit(1752280681.785:574): pid=5890 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:38:01.792738 kernel: audit: type=1103 audit(1752280681.786:575): pid=5890 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:38:01.792771 kernel: audit: type=1006 audit(1752280681.786:576): pid=5890 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) 
old-ses=4294967295 ses=24 res=1 Jul 12 00:38:01.792510 systemd[1]: Started session-24.scope. Jul 12 00:38:01.793624 systemd-logind[1309]: New session 24 of user core. Jul 12 00:38:01.786000 audit[5890]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffebf02700 a2=3 a3=1 items=0 ppid=1 pid=5890 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:38:01.797327 kernel: audit: type=1300 audit(1752280681.786:576): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffebf02700 a2=3 a3=1 items=0 ppid=1 pid=5890 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:38:01.797407 kernel: audit: type=1327 audit(1752280681.786:576): proctitle=737368643A20636F7265205B707269765D Jul 12 00:38:01.786000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 12 00:38:01.796000 audit[5890]: USER_START pid=5890 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:38:01.801667 kernel: audit: type=1105 audit(1752280681.796:577): pid=5890 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:38:01.798000 audit[5893]: CRED_ACQ pid=5893 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:38:01.804491 kernel: 
audit: type=1103 audit(1752280681.798:578): pid=5893 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:38:01.963897 sshd[5890]: pam_unix(sshd:session): session closed for user core Jul 12 00:38:01.964000 audit[5890]: USER_END pid=5890 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:38:01.967127 systemd[1]: sshd@23-10.0.0.111:22-10.0.0.1:52864.service: Deactivated successfully. Jul 12 00:38:01.964000 audit[5890]: CRED_DISP pid=5890 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:38:01.968704 systemd[1]: session-24.scope: Deactivated successfully. Jul 12 00:38:01.969277 systemd-logind[1309]: Session 24 logged out. Waiting for processes to exit. Jul 12 00:38:01.970337 systemd-logind[1309]: Removed session 24. 
Jul 12 00:38:01.972574 kernel: audit: type=1106 audit(1752280681.964:579): pid=5890 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:38:01.972643 kernel: audit: type=1104 audit(1752280681.964:580): pid=5890 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 12 00:38:01.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.111:22-10.0.0.1:52864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'