Jul 10 00:35:05.730229 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 10 00:35:05.730249 kernel: Linux version 5.15.186-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Wed Jul 9 23:19:15 -00 2025 Jul 10 00:35:05.730257 kernel: efi: EFI v2.70 by EDK II Jul 10 00:35:05.730263 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 Jul 10 00:35:05.730268 kernel: random: crng init done Jul 10 00:35:05.730274 kernel: ACPI: Early table checksum verification disabled Jul 10 00:35:05.730280 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Jul 10 00:35:05.730288 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Jul 10 00:35:05.730293 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:35:05.730299 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:35:05.730304 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:35:05.730309 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:35:05.730315 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:35:05.730321 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:35:05.730336 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:35:05.730342 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:35:05.730348 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:35:05.730354 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jul 10 00:35:05.730360 kernel: NUMA: Failed to initialise from firmware Jul 10 00:35:05.730366 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jul 10 00:35:05.730372 kernel: NUMA: NODE_DATA [mem 0xdcb09900-0xdcb0efff] Jul 10 00:35:05.730378 kernel: Zone ranges: Jul 10 00:35:05.730383 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jul 10 00:35:05.730391 kernel: DMA32 empty Jul 10 00:35:05.730397 kernel: Normal empty Jul 10 00:35:05.730402 kernel: Movable zone start for each node Jul 10 00:35:05.730408 kernel: Early memory node ranges Jul 10 00:35:05.730414 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Jul 10 00:35:05.730419 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Jul 10 00:35:05.730425 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Jul 10 00:35:05.730440 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Jul 10 00:35:05.730465 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Jul 10 00:35:05.730471 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Jul 10 00:35:05.730477 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Jul 10 00:35:05.730482 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jul 10 00:35:05.730490 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jul 10 00:35:05.730496 kernel: psci: probing for conduit method from ACPI. Jul 10 00:35:05.730502 kernel: psci: PSCIv1.1 detected in firmware. 
Jul 10 00:35:05.730507 kernel: psci: Using standard PSCI v0.2 function IDs Jul 10 00:35:05.730513 kernel: psci: Trusted OS migration not required Jul 10 00:35:05.730522 kernel: psci: SMC Calling Convention v1.1 Jul 10 00:35:05.730528 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jul 10 00:35:05.730535 kernel: ACPI: SRAT not present Jul 10 00:35:05.730542 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880 Jul 10 00:35:05.730548 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096 Jul 10 00:35:05.730554 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jul 10 00:35:05.730561 kernel: Detected PIPT I-cache on CPU0 Jul 10 00:35:05.730567 kernel: CPU features: detected: GIC system register CPU interface Jul 10 00:35:05.730573 kernel: CPU features: detected: Hardware dirty bit management Jul 10 00:35:05.730579 kernel: CPU features: detected: Spectre-v4 Jul 10 00:35:05.730585 kernel: CPU features: detected: Spectre-BHB Jul 10 00:35:05.730592 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 10 00:35:05.730599 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 10 00:35:05.730605 kernel: CPU features: detected: ARM erratum 1418040 Jul 10 00:35:05.730611 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 10 00:35:05.730617 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jul 10 00:35:05.730623 kernel: Policy zone: DMA Jul 10 00:35:05.730630 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=97626bbec4e8c603c151f40dbbae5fabba3cda417023e06335ea30183b36a27f Jul 10 00:35:05.730637 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 10 00:35:05.730643 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 10 00:35:05.730649 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 10 00:35:05.730656 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 10 00:35:05.730663 kernel: Memory: 2457332K/2572288K available (9792K kernel code, 2094K rwdata, 7588K rodata, 36416K init, 777K bss, 114956K reserved, 0K cma-reserved) Jul 10 00:35:05.730670 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 10 00:35:05.730676 kernel: trace event string verifier disabled Jul 10 00:35:05.730682 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 10 00:35:05.730689 kernel: rcu: RCU event tracing is enabled. Jul 10 00:35:05.730696 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 10 00:35:05.730702 kernel: Trampoline variant of Tasks RCU enabled. Jul 10 00:35:05.730708 kernel: Tracing variant of Tasks RCU enabled. Jul 10 00:35:05.730714 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 10 00:35:05.730721 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 10 00:35:05.730727 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 10 00:35:05.730734 kernel: GICv3: 256 SPIs implemented Jul 10 00:35:05.730741 kernel: GICv3: 0 Extended SPIs implemented Jul 10 00:35:05.730747 kernel: GICv3: Distributor has no Range Selector support Jul 10 00:35:05.730753 kernel: Root IRQ handler: gic_handle_irq Jul 10 00:35:05.730759 kernel: GICv3: 16 PPIs implemented Jul 10 00:35:05.730765 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jul 10 00:35:05.730771 kernel: ACPI: SRAT not present Jul 10 00:35:05.730777 kernel: ITS [mem 0x08080000-0x0809ffff] Jul 10 00:35:05.730784 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) Jul 10 00:35:05.730790 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) Jul 10 00:35:05.730797 kernel: GICv3: using LPI property table @0x00000000400d0000 Jul 10 00:35:05.730803 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 Jul 10 00:35:05.730811 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 10 00:35:05.730817 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 10 00:35:05.730823 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 10 00:35:05.730829 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 10 00:35:05.730836 kernel: arm-pv: using stolen time PV Jul 10 00:35:05.730842 kernel: Console: colour dummy device 80x25 Jul 10 00:35:05.730849 kernel: ACPI: Core revision 20210730 Jul 10 00:35:05.730855 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 10 00:35:05.730862 kernel: pid_max: default: 32768 minimum: 301 Jul 10 00:35:05.730868 kernel: LSM: Security Framework initializing Jul 10 00:35:05.730876 kernel: SELinux: Initializing. Jul 10 00:35:05.731147 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 10 00:35:05.731162 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 10 00:35:05.731168 kernel: rcu: Hierarchical SRCU implementation. Jul 10 00:35:05.731175 kernel: Platform MSI: ITS@0x8080000 domain created Jul 10 00:35:05.731181 kernel: PCI/MSI: ITS@0x8080000 domain created Jul 10 00:35:05.731187 kernel: Remapping and enabling EFI services. Jul 10 00:35:05.731194 kernel: smp: Bringing up secondary CPUs ... 
Jul 10 00:35:05.731201 kernel: Detected PIPT I-cache on CPU1 Jul 10 00:35:05.731210 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jul 10 00:35:05.731221 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 Jul 10 00:35:05.731227 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 10 00:35:05.731234 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 10 00:35:05.731240 kernel: Detected PIPT I-cache on CPU2 Jul 10 00:35:05.731247 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jul 10 00:35:05.731253 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 Jul 10 00:35:05.731260 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 10 00:35:05.731266 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jul 10 00:35:05.731272 kernel: Detected PIPT I-cache on CPU3 Jul 10 00:35:05.731280 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jul 10 00:35:05.731287 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 Jul 10 00:35:05.731293 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 10 00:35:05.731300 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jul 10 00:35:05.731310 kernel: smp: Brought up 1 node, 4 CPUs Jul 10 00:35:05.731318 kernel: SMP: Total of 4 processors activated. Jul 10 00:35:05.731325 kernel: CPU features: detected: 32-bit EL0 Support Jul 10 00:35:05.731341 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 10 00:35:05.731348 kernel: CPU features: detected: Common not Private translations Jul 10 00:35:05.731355 kernel: CPU features: detected: CRC32 instructions Jul 10 00:35:05.731361 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 10 00:35:05.731368 kernel: CPU features: detected: LSE atomic instructions Jul 10 00:35:05.731376 kernel: CPU features: detected: Privileged Access Never Jul 10 00:35:05.731383 kernel: CPU features: detected: RAS Extension Support Jul 10 00:35:05.731390 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 10 00:35:05.731397 kernel: CPU: All CPU(s) started at EL1 Jul 10 00:35:05.731403 kernel: alternatives: patching kernel code Jul 10 00:35:05.731411 kernel: devtmpfs: initialized Jul 10 00:35:05.731418 kernel: KASLR enabled Jul 10 00:35:05.731425 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 10 00:35:05.731441 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 10 00:35:05.731448 kernel: pinctrl core: initialized pinctrl subsystem Jul 10 00:35:05.731455 kernel: SMBIOS 3.0.0 present. 
Jul 10 00:35:05.731462 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Jul 10 00:35:05.731469 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 10 00:35:05.731476 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 10 00:35:05.731484 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 10 00:35:05.731491 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 10 00:35:05.731498 kernel: audit: initializing netlink subsys (disabled) Jul 10 00:35:05.731505 kernel: audit: type=2000 audit(0.036:1): state=initialized audit_enabled=0 res=1 Jul 10 00:35:05.731511 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 10 00:35:05.731518 kernel: cpuidle: using governor menu Jul 10 00:35:05.731525 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 10 00:35:05.731532 kernel: ASID allocator initialised with 32768 entries Jul 10 00:35:05.731538 kernel: ACPI: bus type PCI registered Jul 10 00:35:05.731546 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 10 00:35:05.731553 kernel: Serial: AMBA PL011 UART driver Jul 10 00:35:05.731560 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Jul 10 00:35:05.731567 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Jul 10 00:35:05.731573 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Jul 10 00:35:05.731580 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Jul 10 00:35:05.731587 kernel: cryptd: max_cpu_qlen set to 1000 Jul 10 00:35:05.731594 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 10 00:35:05.731600 kernel: ACPI: Added _OSI(Module Device) Jul 10 00:35:05.731608 kernel: ACPI: Added _OSI(Processor Device) Jul 10 00:35:05.731615 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 10 00:35:05.731622 kernel: ACPI: Added _OSI(Linux-Dell-Video) Jul 10 00:35:05.731628 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Jul 10 00:35:05.731635 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Jul 10 00:35:05.731642 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 10 00:35:05.731648 kernel: ACPI: Interpreter enabled Jul 10 00:35:05.731655 kernel: ACPI: Using GIC for interrupt routing Jul 10 00:35:05.731662 kernel: ACPI: MCFG table detected, 1 entries Jul 10 00:35:05.731670 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jul 10 00:35:05.731677 kernel: printk: console [ttyAMA0] enabled Jul 10 00:35:05.731683 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 10 00:35:05.731808 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 10 00:35:05.731875 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 10 00:35:05.731936 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 10 00:35:05.731995 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jul 10 00:35:05.732056 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jul 10 00:35:05.732065 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jul 10 00:35:05.732072 kernel: PCI host bridge to bus 0000:00 Jul 10 00:35:05.732140 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jul 10 00:35:05.732194 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 10 
00:35:05.732247 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jul 10 00:35:05.732300 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 10 00:35:05.732395 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jul 10 00:35:05.732486 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jul 10 00:35:05.732552 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jul 10 00:35:05.732613 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jul 10 00:35:05.732672 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jul 10 00:35:05.732751 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jul 10 00:35:05.732813 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jul 10 00:35:05.732877 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jul 10 00:35:05.732932 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jul 10 00:35:05.732986 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 10 00:35:05.733039 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jul 10 00:35:05.733048 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 10 00:35:05.733055 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 10 00:35:05.733062 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 10 00:35:05.733069 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 10 00:35:05.733077 kernel: iommu: Default domain type: Translated Jul 10 00:35:05.733084 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 10 00:35:05.733091 kernel: vgaarb: loaded Jul 10 00:35:05.733098 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 10 00:35:05.733105 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 10 00:35:05.733112 kernel: PTP clock support registered Jul 10 00:35:05.733119 kernel: Registered efivars operations Jul 10 00:35:05.733126 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 10 00:35:05.733132 kernel: VFS: Disk quotas dquot_6.6.0 Jul 10 00:35:05.733141 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 10 00:35:05.733147 kernel: pnp: PnP ACPI init Jul 10 00:35:05.733211 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jul 10 00:35:05.733221 kernel: pnp: PnP ACPI: found 1 devices Jul 10 00:35:05.733228 kernel: NET: Registered PF_INET protocol family Jul 10 00:35:05.733235 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 10 00:35:05.733242 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 10 00:35:05.733249 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 10 00:35:05.733257 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 10 00:35:05.733264 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Jul 10 00:35:05.733271 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 10 00:35:05.733278 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 10 00:35:05.733285 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 10 00:35:05.733292 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 10 00:35:05.733298 kernel: PCI: CLS 0 bytes, default 64 Jul 10 00:35:05.733305 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jul 10 00:35:05.733312 kernel: kvm [1]: HYP mode not available Jul 10 00:35:05.733320 kernel: Initialise system trusted keyrings Jul 10 00:35:05.733480 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 10 00:35:05.733493 kernel: Key type asymmetric registered Jul 10 00:35:05.733500 kernel: Asymmetric key parser 'x509' registered Jul 10 00:35:05.733540 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 10 00:35:05.733548 kernel: io scheduler mq-deadline registered Jul 10 00:35:05.733555 kernel: io scheduler kyber registered Jul 10 00:35:05.733562 kernel: io scheduler bfq registered Jul 10 00:35:05.733569 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 10 00:35:05.733581 kernel: ACPI: button: Power Button [PWRB] Jul 10 00:35:05.733588 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 10 00:35:05.733685 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jul 10 00:35:05.733696 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 10 00:35:05.733703 kernel: thunder_xcv, ver 1.0 Jul 10 00:35:05.733709 kernel: thunder_bgx, ver 1.0 Jul 10 00:35:05.733716 kernel: nicpf, ver 1.0 Jul 10 00:35:05.733722 kernel: nicvf, ver 1.0 Jul 10 00:35:05.733794 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 10 00:35:05.733853 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-10T00:35:05 UTC (1752107705) Jul 10 00:35:05.733863 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 10 00:35:05.733870 kernel: NET: Registered PF_INET6 protocol family Jul 10 00:35:05.733876 kernel: Segment Routing with IPv6 Jul 10 00:35:05.733883 kernel: In-situ OAM (IOAM) with IPv6 Jul 10 00:35:05.733890 kernel: NET: Registered PF_PACKET protocol family Jul 10 00:35:05.733897 kernel: Key type 
dns_resolver registered Jul 10 00:35:05.733903 kernel: registered taskstats version 1 Jul 10 00:35:05.733912 kernel: Loading compiled-in X.509 certificates Jul 10 00:35:05.733919 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.186-flatcar: 9e274a0dc4fc3d34232d90d226b034c4fe0e3e22' Jul 10 00:35:05.733926 kernel: Key type .fscrypt registered Jul 10 00:35:05.733932 kernel: Key type fscrypt-provisioning registered Jul 10 00:35:05.733939 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 10 00:35:05.733946 kernel: ima: Allocated hash algorithm: sha1 Jul 10 00:35:05.733953 kernel: ima: No architecture policies found Jul 10 00:35:05.733959 kernel: clk: Disabling unused clocks Jul 10 00:35:05.733966 kernel: Freeing unused kernel memory: 36416K Jul 10 00:35:05.733974 kernel: Run /init as init process Jul 10 00:35:05.733980 kernel: with arguments: Jul 10 00:35:05.733987 kernel: /init Jul 10 00:35:05.733994 kernel: with environment: Jul 10 00:35:05.734000 kernel: HOME=/ Jul 10 00:35:05.734007 kernel: TERM=linux Jul 10 00:35:05.734013 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 10 00:35:05.734022 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 10 00:35:05.734032 systemd[1]: Detected virtualization kvm. Jul 10 00:35:05.734040 systemd[1]: Detected architecture arm64. Jul 10 00:35:05.734046 systemd[1]: Running in initrd. Jul 10 00:35:05.734053 systemd[1]: No hostname configured, using default hostname. Jul 10 00:35:05.734060 systemd[1]: Hostname set to . Jul 10 00:35:05.734068 systemd[1]: Initializing machine ID from VM UUID. Jul 10 00:35:05.734075 systemd[1]: Queued start job for default target initrd.target. Jul 10 00:35:05.734082 systemd[1]: Started systemd-ask-password-console.path. Jul 10 00:35:05.734090 systemd[1]: Reached target cryptsetup.target. Jul 10 00:35:05.734098 systemd[1]: Reached target paths.target. Jul 10 00:35:05.734105 systemd[1]: Reached target slices.target. Jul 10 00:35:05.734112 systemd[1]: Reached target swap.target. Jul 10 00:35:05.734119 systemd[1]: Reached target timers.target. Jul 10 00:35:05.734126 systemd[1]: Listening on iscsid.socket. Jul 10 00:35:05.734133 systemd[1]: Listening on iscsiuio.socket. Jul 10 00:35:05.734142 systemd[1]: Listening on systemd-journald-audit.socket. Jul 10 00:35:05.734149 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 10 00:35:05.734157 systemd[1]: Listening on systemd-journald.socket. Jul 10 00:35:05.734164 systemd[1]: Listening on systemd-networkd.socket. Jul 10 00:35:05.734171 systemd[1]: Listening on systemd-udevd-control.socket. Jul 10 00:35:05.734178 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 10 00:35:05.734186 systemd[1]: Reached target sockets.target. Jul 10 00:35:05.734193 systemd[1]: Starting kmod-static-nodes.service... Jul 10 00:35:05.734200 systemd[1]: Finished network-cleanup.service. Jul 10 00:35:05.734208 systemd[1]: Starting systemd-fsck-usr.service... Jul 10 00:35:05.734215 systemd[1]: Starting systemd-journald.service... Jul 10 00:35:05.734223 systemd[1]: Starting systemd-modules-load.service... Jul 10 00:35:05.734230 systemd[1]: Starting systemd-resolved.service... Jul 10 00:35:05.734237 systemd[1]: Starting systemd-vconsole-setup.service... 
Jul 10 00:35:05.734244 systemd[1]: Finished kmod-static-nodes.service. Jul 10 00:35:05.734252 systemd[1]: Finished systemd-fsck-usr.service. Jul 10 00:35:05.734259 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 10 00:35:05.734266 systemd[1]: Finished systemd-vconsole-setup.service. Jul 10 00:35:05.734275 kernel: audit: type=1130 audit(1752107705.729:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:05.734282 systemd[1]: Starting dracut-cmdline-ask.service... Jul 10 00:35:05.734293 systemd-journald[290]: Journal started Jul 10 00:35:05.734354 systemd-journald[290]: Runtime Journal (/run/log/journal/c82acac781fe4258b3c2d2e5ca035766) is 6.0M, max 48.7M, 42.6M free. Jul 10 00:35:05.734388 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 10 00:35:05.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:05.726181 systemd-modules-load[291]: Inserted module 'overlay' Jul 10 00:35:05.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:05.739448 kernel: audit: type=1130 audit(1752107705.736:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:05.739467 systemd[1]: Started systemd-journald.service. Jul 10 00:35:05.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:05.749493 kernel: audit: type=1130 audit(1752107705.739:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:05.753448 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 10 00:35:05.757206 systemd-modules-load[291]: Inserted module 'br_netfilter' Jul 10 00:35:05.758100 kernel: Bridge firewalling registered Jul 10 00:35:05.760118 systemd-resolved[292]: Positive Trust Anchors: Jul 10 00:35:05.760133 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:35:05.760160 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 10 00:35:05.764280 systemd-resolved[292]: Defaulting to hostname 'linux'. Jul 10 00:35:05.767824 systemd[1]: Started systemd-resolved.service. 
Jul 10 00:35:05.772254 kernel: SCSI subsystem initialized Jul 10 00:35:05.772274 kernel: audit: type=1130 audit(1752107705.768:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:05.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:05.772785 systemd[1]: Reached target nss-lookup.target. Jul 10 00:35:05.774427 systemd[1]: Finished dracut-cmdline-ask.service. Jul 10 00:35:05.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:05.778444 kernel: audit: type=1130 audit(1752107705.774:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:05.779202 systemd[1]: Starting dracut-cmdline.service... Jul 10 00:35:05.783396 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 10 00:35:05.783414 kernel: device-mapper: uevent: version 1.0.3 Jul 10 00:35:05.783423 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 10 00:35:05.784469 systemd-modules-load[291]: Inserted module 'dm_multipath' Jul 10 00:35:05.785216 systemd[1]: Finished systemd-modules-load.service. Jul 10 00:35:05.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:05.789447 kernel: audit: type=1130 audit(1752107705.785:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:05.789387 systemd[1]: Starting systemd-sysctl.service... Jul 10 00:35:05.790896 dracut-cmdline[308]: dracut-dracut-053 Jul 10 00:35:05.793782 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=97626bbec4e8c603c151f40dbbae5fabba3cda417023e06335ea30183b36a27f Jul 10 00:35:05.797223 systemd[1]: Finished systemd-sysctl.service. Jul 10 00:35:05.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:05.802460 kernel: audit: type=1130 audit(1752107705.798:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:05.855455 kernel: Loading iSCSI transport class v2.0-870. Jul 10 00:35:05.867451 kernel: iscsi: registered transport (tcp) Jul 10 00:35:05.885457 kernel: iscsi: registered transport (qla4xxx) Jul 10 00:35:05.885510 kernel: QLogic iSCSI HBA Driver Jul 10 00:35:05.920866 systemd[1]: Finished dracut-cmdline.service. 
Jul 10 00:35:05.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:05.922673 systemd[1]: Starting dracut-pre-udev.service... Jul 10 00:35:05.925990 kernel: audit: type=1130 audit(1752107705.921:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:05.967454 kernel: raid6: neonx8 gen() 13769 MB/s Jul 10 00:35:05.984444 kernel: raid6: neonx8 xor() 10789 MB/s Jul 10 00:35:06.001448 kernel: raid6: neonx4 gen() 13557 MB/s Jul 10 00:35:06.018446 kernel: raid6: neonx4 xor() 11148 MB/s Jul 10 00:35:06.035445 kernel: raid6: neonx2 gen() 12958 MB/s Jul 10 00:35:06.052443 kernel: raid6: neonx2 xor() 10201 MB/s Jul 10 00:35:06.069446 kernel: raid6: neonx1 gen() 10615 MB/s Jul 10 00:35:06.086448 kernel: raid6: neonx1 xor() 8778 MB/s Jul 10 00:35:06.103442 kernel: raid6: int64x8 gen() 6273 MB/s Jul 10 00:35:06.120441 kernel: raid6: int64x8 xor() 3546 MB/s Jul 10 00:35:06.137443 kernel: raid6: int64x4 gen() 7226 MB/s Jul 10 00:35:06.154441 kernel: raid6: int64x4 xor() 3855 MB/s Jul 10 00:35:06.171445 kernel: raid6: int64x2 gen() 6156 MB/s Jul 10 00:35:06.188445 kernel: raid6: int64x2 xor() 3323 MB/s Jul 10 00:35:06.205450 kernel: raid6: int64x1 gen() 5043 MB/s Jul 10 00:35:06.222609 kernel: raid6: int64x1 xor() 2646 MB/s Jul 10 00:35:06.222626 kernel: raid6: using algorithm neonx8 gen() 13769 MB/s Jul 10 00:35:06.222635 kernel: raid6: .... xor() 10789 MB/s, rmw enabled Jul 10 00:35:06.223739 kernel: raid6: using neon recovery algorithm Jul 10 00:35:06.234449 kernel: xor: measuring software checksum speed Jul 10 00:35:06.234462 kernel: 8regs : 15710 MB/sec Jul 10 00:35:06.235607 kernel: 32regs : 20717 MB/sec Jul 10 00:35:06.236815 kernel: arm64_neon : 27738 MB/sec Jul 10 00:35:06.236825 kernel: xor: using function: arm64_neon (27738 MB/sec) Jul 10 00:35:06.314450 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Jul 10 00:35:06.324891 systemd[1]: Finished dracut-pre-udev.service. Jul 10 00:35:06.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:06.328000 audit: BPF prog-id=7 op=LOAD Jul 10 00:35:06.328000 audit: BPF prog-id=8 op=LOAD Jul 10 00:35:06.329182 systemd[1]: Starting systemd-udevd.service... Jul 10 00:35:06.330604 kernel: audit: type=1130 audit(1752107706.325:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:06.341976 systemd-udevd[492]: Using default interface naming scheme 'v252'. Jul 10 00:35:06.345440 systemd[1]: Started systemd-udevd.service. Jul 10 00:35:06.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:06.347520 systemd[1]: Starting dracut-pre-trigger.service... Jul 10 00:35:06.359686 dracut-pre-trigger[500]: rd.md=0: removing MD RAID activation Jul 10 00:35:06.389155 systemd[1]: Finished dracut-pre-trigger.service. 
Jul 10 00:35:06.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:06.390800 systemd[1]: Starting systemd-udev-trigger.service... Jul 10 00:35:06.425215 systemd[1]: Finished systemd-udev-trigger.service. Jul 10 00:35:06.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:06.452453 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 10 00:35:06.458514 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 10 00:35:06.458528 kernel: GPT:9289727 != 19775487 Jul 10 00:35:06.458537 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 10 00:35:06.458546 kernel: GPT:9289727 != 19775487 Jul 10 00:35:06.458554 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 10 00:35:06.458562 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:35:06.470796 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 10 00:35:06.473085 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (549) Jul 10 00:35:06.478064 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 10 00:35:06.479111 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 10 00:35:06.484254 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 10 00:35:06.487692 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 10 00:35:06.489298 systemd[1]: Starting disk-uuid.service... Jul 10 00:35:06.494857 disk-uuid[562]: Primary Header is updated. Jul 10 00:35:06.494857 disk-uuid[562]: Secondary Entries is updated. Jul 10 00:35:06.494857 disk-uuid[562]: Secondary Header is updated. Jul 10 00:35:06.498462 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:35:06.506449 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:35:06.508446 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:35:07.509450 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:35:07.510167 disk-uuid[563]: The operation has completed successfully. Jul 10 00:35:07.529475 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 10 00:35:07.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:07.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:07.529571 systemd[1]: Finished disk-uuid.service. Jul 10 00:35:07.533547 systemd[1]: Starting verity-setup.service... Jul 10 00:35:07.552456 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 10 00:35:07.572275 systemd[1]: Found device dev-mapper-usr.device. Jul 10 00:35:07.573819 systemd[1]: Mounting sysusr-usr.mount... Jul 10 00:35:07.574617 systemd[1]: Finished verity-setup.service. Jul 10 00:35:07.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:35:07.628463 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 10 00:35:07.628965 systemd[1]: Mounted sysusr-usr.mount. Jul 10 00:35:07.629811 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 10 00:35:07.630530 systemd[1]: Starting ignition-setup.service... Jul 10 00:35:07.632794 systemd[1]: Starting parse-ip-for-networkd.service... Jul 10 00:35:07.641710 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 10 00:35:07.641744 kernel: BTRFS info (device vda6): using free space tree Jul 10 00:35:07.641755 kernel: BTRFS info (device vda6): has skinny extents Jul 10 00:35:07.650505 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 10 00:35:07.658345 systemd[1]: Finished ignition-setup.service. Jul 10 00:35:07.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:07.659973 systemd[1]: Starting ignition-fetch-offline.service... Jul 10 00:35:07.725942 systemd[1]: Finished parse-ip-for-networkd.service. Jul 10 00:35:07.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:07.727000 audit: BPF prog-id=9 op=LOAD Jul 10 00:35:07.728323 systemd[1]: Starting systemd-networkd.service... Jul 10 00:35:07.756898 systemd-networkd[734]: lo: Link UP Jul 10 00:35:07.756910 systemd-networkd[734]: lo: Gained carrier Jul 10 00:35:07.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:07.757301 systemd-networkd[734]: Enumeration completed Jul 10 00:35:07.757502 systemd-networkd[734]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:35:07.757633 systemd[1]: Started systemd-networkd.service. Jul 10 00:35:07.758948 systemd-networkd[734]: eth0: Link UP Jul 10 00:35:07.758953 systemd-networkd[734]: eth0: Gained carrier Jul 10 00:35:07.759181 systemd[1]: Reached target network.target. Jul 10 00:35:07.761653 systemd[1]: Starting iscsiuio.service... Jul 10 00:35:07.780771 systemd[1]: Started iscsiuio.service. Jul 10 00:35:07.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:07.782533 systemd[1]: Starting iscsid.service... Jul 10 00:35:07.785916 systemd-networkd[734]: eth0: DHCPv4 address 10.0.0.85/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 00:35:07.787278 iscsid[744]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 10 00:35:07.787278 iscsid[744]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jul 10 00:35:07.787278 iscsid[744]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. 
Jul 10 00:35:07.787278 iscsid[744]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 10 00:35:07.787278 iscsid[744]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 10 00:35:07.787278 iscsid[744]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 10 00:35:07.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:07.789069 systemd[1]: Started iscsid.service. Jul 10 00:35:07.795648 systemd[1]: Starting dracut-initqueue.service... Jul 10 00:35:07.800268 ignition[650]: Ignition 2.14.0 Jul 10 00:35:07.800275 ignition[650]: Stage: fetch-offline Jul 10 00:35:07.800324 ignition[650]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:35:07.800333 ignition[650]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:35:07.800509 ignition[650]: parsed url from cmdline: "" Jul 10 00:35:07.800513 ignition[650]: no config URL provided Jul 10 00:35:07.800518 ignition[650]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 00:35:07.800525 ignition[650]: no config at "/usr/lib/ignition/user.ign" Jul 10 00:35:07.800543 ignition[650]: op(1): [started] loading QEMU firmware config module Jul 10 00:35:07.800548 ignition[650]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 10 00:35:07.816764 ignition[650]: op(1): [finished] loading QEMU firmware config module Jul 10 00:35:07.816918 systemd[1]: Finished dracut-initqueue.service. Jul 10 00:35:07.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:07.818945 systemd[1]: Reached target remote-fs-pre.target. Jul 10 00:35:07.820397 systemd[1]: Reached target remote-cryptsetup.target. Jul 10 00:35:07.822495 systemd[1]: Reached target remote-fs.target. Jul 10 00:35:07.825159 systemd[1]: Starting dracut-pre-mount.service... Jul 10 00:35:07.835121 systemd[1]: Finished dracut-pre-mount.service. Jul 10 00:35:07.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:07.862827 ignition[650]: parsing config with SHA512: fc966c6d85af638faacc8e86ea3945396a5d2719a35118fe85e89fa5fa62882ae74b4797a7baa1a88d69970e35e8e4da979daf626d4b87e1a99b6b4c5df950d2 Jul 10 00:35:07.871334 unknown[650]: fetched base config from "system" Jul 10 00:35:07.871349 unknown[650]: fetched user config from "qemu" Jul 10 00:35:07.871952 ignition[650]: fetch-offline: fetch-offline passed Jul 10 00:35:07.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:07.873043 systemd[1]: Finished ignition-fetch-offline.service. Jul 10 00:35:07.872013 ignition[650]: Ignition finished successfully Jul 10 00:35:07.874543 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 10 00:35:07.875281 systemd[1]: Starting ignition-kargs.service... 
Jul 10 00:35:07.884727 ignition[760]: Ignition 2.14.0 Jul 10 00:35:07.884737 ignition[760]: Stage: kargs Jul 10 00:35:07.884834 ignition[760]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:35:07.884844 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:35:07.885743 ignition[760]: kargs: kargs passed Jul 10 00:35:07.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:07.887872 systemd[1]: Finished ignition-kargs.service. Jul 10 00:35:07.885786 ignition[760]: Ignition finished successfully Jul 10 00:35:07.890252 systemd[1]: Starting ignition-disks.service... Jul 10 00:35:07.897098 ignition[766]: Ignition 2.14.0 Jul 10 00:35:07.897109 ignition[766]: Stage: disks Jul 10 00:35:07.897209 ignition[766]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:35:07.899156 systemd[1]: Finished ignition-disks.service. Jul 10 00:35:07.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:07.897219 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:35:07.900647 systemd[1]: Reached target initrd-root-device.target. Jul 10 00:35:07.898185 ignition[766]: disks: disks passed Jul 10 00:35:07.901903 systemd[1]: Reached target local-fs-pre.target. Jul 10 00:35:07.898232 ignition[766]: Ignition finished successfully Jul 10 00:35:07.903493 systemd[1]: Reached target local-fs.target. Jul 10 00:35:07.904811 systemd[1]: Reached target sysinit.target. Jul 10 00:35:07.906008 systemd[1]: Reached target basic.target. Jul 10 00:35:07.908154 systemd[1]: Starting systemd-fsck-root.service... Jul 10 00:35:07.919005 systemd-fsck[774]: ROOT: clean, 619/553520 files, 56022/553472 blocks Jul 10 00:35:07.922091 systemd[1]: Finished systemd-fsck-root.service. Jul 10 00:35:07.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:07.925713 systemd[1]: Mounting sysroot.mount... Jul 10 00:35:07.936468 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 10 00:35:07.934892 systemd[1]: Mounted sysroot.mount. Jul 10 00:35:07.935665 systemd[1]: Reached target initrd-root-fs.target. Jul 10 00:35:07.937670 systemd[1]: Mounting sysroot-usr.mount... Jul 10 00:35:07.938979 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 10 00:35:07.939016 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 10 00:35:07.939041 systemd[1]: Reached target ignition-diskful.target. Jul 10 00:35:07.940842 systemd[1]: Mounted sysroot-usr.mount. Jul 10 00:35:07.942725 systemd[1]: Starting initrd-setup-root.service... 
Jul 10 00:35:07.946997 initrd-setup-root[784]: cut: /sysroot/etc/passwd: No such file or directory Jul 10 00:35:07.950712 initrd-setup-root[792]: cut: /sysroot/etc/group: No such file or directory Jul 10 00:35:07.954813 initrd-setup-root[800]: cut: /sysroot/etc/shadow: No such file or directory Jul 10 00:35:07.958349 initrd-setup-root[808]: cut: /sysroot/etc/gshadow: No such file or directory Jul 10 00:35:07.985260 systemd[1]: Finished initrd-setup-root.service. Jul 10 00:35:07.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:07.987070 systemd[1]: Starting ignition-mount.service... Jul 10 00:35:07.988460 systemd[1]: Starting sysroot-boot.service... Jul 10 00:35:07.992992 bash[825]: umount: /sysroot/usr/share/oem: not mounted. Jul 10 00:35:08.000732 ignition[827]: INFO : Ignition 2.14.0 Jul 10 00:35:08.000732 ignition[827]: INFO : Stage: mount Jul 10 00:35:08.002307 ignition[827]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:35:08.002307 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:35:08.004902 ignition[827]: INFO : mount: mount passed Jul 10 00:35:08.004902 ignition[827]: INFO : Ignition finished successfully Jul 10 00:35:08.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:08.004635 systemd[1]: Finished ignition-mount.service. Jul 10 00:35:08.009295 systemd[1]: Finished sysroot-boot.service. Jul 10 00:35:08.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:08.588210 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 10 00:35:08.595316 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (835) Jul 10 00:35:08.595353 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 10 00:35:08.595370 kernel: BTRFS info (device vda6): using free space tree Jul 10 00:35:08.596596 kernel: BTRFS info (device vda6): has skinny extents Jul 10 00:35:08.599339 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 10 00:35:08.600934 systemd[1]: Starting ignition-files.service... 
Jul 10 00:35:08.614225 ignition[855]: INFO : Ignition 2.14.0 Jul 10 00:35:08.614225 ignition[855]: INFO : Stage: files Jul 10 00:35:08.615993 ignition[855]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:35:08.615993 ignition[855]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:35:08.615993 ignition[855]: DEBUG : files: compiled without relabeling support, skipping Jul 10 00:35:08.619924 ignition[855]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 10 00:35:08.619924 ignition[855]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 10 00:35:08.622801 ignition[855]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 10 00:35:08.622801 ignition[855]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 10 00:35:08.622801 ignition[855]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 10 00:35:08.622801 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 10 00:35:08.622801 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 10 00:35:08.622801 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 10 00:35:08.622801 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 10 00:35:08.620627 unknown[855]: wrote ssh authorized keys file for user: core Jul 10 00:35:08.755424 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 10 00:35:08.954415 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 10 00:35:08.954415 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 10 00:35:08.958253 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 10 00:35:08.958253 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 10 00:35:08.958253 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 10 00:35:08.958253 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 00:35:08.958253 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 00:35:08.958253 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 10 00:35:08.958253 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 10 00:35:08.958253 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 00:35:08.958253 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 00:35:08.958253 ignition[855]: INFO 
: files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 10 00:35:08.958253 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 10 00:35:08.958253 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 10 00:35:08.958253 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Jul 10 00:35:09.475646 systemd-networkd[734]: eth0: Gained IPv6LL Jul 10 00:35:09.505540 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 10 00:35:10.257117 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 10 00:35:10.259642 ignition[855]: INFO : files: op(c): [started] processing unit "containerd.service" Jul 10 00:35:10.261329 ignition[855]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 10 00:35:10.264472 ignition[855]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 10 00:35:10.264472 ignition[855]: INFO : files: op(c): [finished] processing unit "containerd.service" Jul 10 00:35:10.264472 ignition[855]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jul 10 00:35:10.264472 ignition[855]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 10 00:35:10.264472 ignition[855]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 10 00:35:10.264472 ignition[855]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jul 10 00:35:10.264472 ignition[855]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Jul 10 00:35:10.264472 ignition[855]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 10 00:35:10.264472 ignition[855]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 10 00:35:10.264472 ignition[855]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Jul 10 00:35:10.264472 ignition[855]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 10 00:35:10.264472 ignition[855]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 10 00:35:10.264472 ignition[855]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Jul 10 00:35:10.264472 ignition[855]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 10 00:35:10.313447 ignition[855]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 10 00:35:10.313447 
ignition[855]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Jul 10 00:35:10.313447 ignition[855]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 10 00:35:10.313447 ignition[855]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 10 00:35:10.313447 ignition[855]: INFO : files: files passed Jul 10 00:35:10.313447 ignition[855]: INFO : Ignition finished successfully Jul 10 00:35:10.337359 kernel: kauditd_printk_skb: 23 callbacks suppressed Jul 10 00:35:10.337382 kernel: audit: type=1130 audit(1752107710.315:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.337401 kernel: audit: type=1130 audit(1752107710.328:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.337412 kernel: audit: type=1130 audit(1752107710.331:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.337421 kernel: audit: type=1131 audit(1752107710.331:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.313596 systemd[1]: Finished ignition-files.service. Jul 10 00:35:10.316912 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 10 00:35:10.323622 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 10 00:35:10.341916 initrd-setup-root-after-ignition[878]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Jul 10 00:35:10.324301 systemd[1]: Starting ignition-quench.service... Jul 10 00:35:10.344241 initrd-setup-root-after-ignition[881]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 10 00:35:10.326783 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 10 00:35:10.328783 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 10 00:35:10.328865 systemd[1]: Finished ignition-quench.service. Jul 10 00:35:10.332350 systemd[1]: Reached target ignition-complete.target. Jul 10 00:35:10.338955 systemd[1]: Starting initrd-parse-etc.service... 
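The files stage above is driven entirely by the Ignition config handed to the VM: it creates or modifies the core user, installs its SSH keys, downloads the helm tarball, writes the assorted files and the kubernetes extension link, drops in the containerd override, and flips the service presets. The config itself is not visible in the log; a Butane sketch (all key material, file bodies and unit contents below are illustrative placeholders, not recovered from this boot) that would drive a files stage along these lines:

# config.bu  (Butane, flatcar variant -- illustrative sketch)
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA... core@example
storage:
  files:
    - path: /opt/helm-v3.13.2-linux-arm64.tar.gz
      contents:
        source: https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz
    - path: /home/core/install.sh
      mode: 0755
      contents:
        inline: |
          #!/bin/bash
          # placeholder body: unpack the helm tarball fetched above
          tar -C /opt -xzf /opt/helm-v3.13.2-linux-arm64.tar.gz
  links:
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw
systemd:
  units:
    - name: containerd.service
      dropins:
        - name: 10-use-cgroupfs.conf
          contents: |
            [Service]
            # placeholder: the real drop-in body is not visible in the log
    - name: prepare-helm.service
      enabled: true
      contents: |
        [Unit]
        Description=Unpack helm (illustrative)
        [Service]
        Type=oneshot
        ExecStart=/home/core/install.sh
        [Install]
        WantedBy=multi-user.target

A config like this would be transpiled with butane --strict config.bu > config.ign and supplied to the qemu instance, and Ignition replays it against /sysroot from the initramfs, which is what the op(…) entries above record.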
Jul 10 00:35:10.350997 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 10 00:35:10.351098 systemd[1]: Finished initrd-parse-etc.service. Jul 10 00:35:10.357849 kernel: audit: type=1130 audit(1752107710.352:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.357871 kernel: audit: type=1131 audit(1752107710.352:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.352866 systemd[1]: Reached target initrd-fs.target. Jul 10 00:35:10.358565 systemd[1]: Reached target initrd.target. Jul 10 00:35:10.359872 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 10 00:35:10.360625 systemd[1]: Starting dracut-pre-pivot.service... Jul 10 00:35:10.372276 systemd[1]: Finished dracut-pre-pivot.service. Jul 10 00:35:10.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.373930 systemd[1]: Starting initrd-cleanup.service... Jul 10 00:35:10.377304 kernel: audit: type=1130 audit(1752107710.372:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.381744 systemd[1]: Stopped target nss-lookup.target. Jul 10 00:35:10.382626 systemd[1]: Stopped target remote-cryptsetup.target. Jul 10 00:35:10.384061 systemd[1]: Stopped target timers.target. Jul 10 00:35:10.385380 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 10 00:35:10.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.385505 systemd[1]: Stopped dracut-pre-pivot.service. Jul 10 00:35:10.390785 kernel: audit: type=1131 audit(1752107710.386:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.386765 systemd[1]: Stopped target initrd.target. Jul 10 00:35:10.390263 systemd[1]: Stopped target basic.target. Jul 10 00:35:10.391497 systemd[1]: Stopped target ignition-complete.target. Jul 10 00:35:10.392784 systemd[1]: Stopped target ignition-diskful.target. Jul 10 00:35:10.394068 systemd[1]: Stopped target initrd-root-device.target. Jul 10 00:35:10.395573 systemd[1]: Stopped target remote-fs.target. Jul 10 00:35:10.397024 systemd[1]: Stopped target remote-fs-pre.target. Jul 10 00:35:10.398405 systemd[1]: Stopped target sysinit.target. Jul 10 00:35:10.399658 systemd[1]: Stopped target local-fs.target. Jul 10 00:35:10.401048 systemd[1]: Stopped target local-fs-pre.target. 
Jul 10 00:35:10.402318 systemd[1]: Stopped target swap.target. Jul 10 00:35:10.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.403515 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 10 00:35:10.412397 kernel: audit: type=1131 audit(1752107710.404:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.403628 systemd[1]: Stopped dracut-pre-mount.service. Jul 10 00:35:10.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.404924 systemd[1]: Stopped target cryptsetup.target. Jul 10 00:35:10.417661 kernel: audit: type=1131 audit(1752107710.412:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.408541 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 10 00:35:10.408642 systemd[1]: Stopped dracut-initqueue.service. Jul 10 00:35:10.413264 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 10 00:35:10.413374 systemd[1]: Stopped ignition-fetch-offline.service. Jul 10 00:35:10.417175 systemd[1]: Stopped target paths.target. Jul 10 00:35:10.418325 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 10 00:35:10.421467 systemd[1]: Stopped systemd-ask-password-console.path. Jul 10 00:35:10.422710 systemd[1]: Stopped target slices.target. Jul 10 00:35:10.424056 systemd[1]: Stopped target sockets.target. Jul 10 00:35:10.425620 systemd[1]: iscsid.socket: Deactivated successfully. Jul 10 00:35:10.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.425691 systemd[1]: Closed iscsid.socket. Jul 10 00:35:10.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.426924 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 10 00:35:10.426988 systemd[1]: Closed iscsiuio.socket. Jul 10 00:35:10.428115 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 10 00:35:10.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.428213 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 10 00:35:10.429590 systemd[1]: ignition-files.service: Deactivated successfully. Jul 10 00:35:10.429682 systemd[1]: Stopped ignition-files.service. Jul 10 00:35:10.431800 systemd[1]: Stopping ignition-mount.service... Jul 10 00:35:10.432932 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Jul 10 00:35:10.439814 ignition[895]: INFO : Ignition 2.14.0 Jul 10 00:35:10.439814 ignition[895]: INFO : Stage: umount Jul 10 00:35:10.439814 ignition[895]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:35:10.439814 ignition[895]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:35:10.439814 ignition[895]: INFO : umount: umount passed Jul 10 00:35:10.439814 ignition[895]: INFO : Ignition finished successfully Jul 10 00:35:10.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.433054 systemd[1]: Stopped kmod-static-nodes.service. Jul 10 00:35:10.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.435136 systemd[1]: Stopping sysroot-boot.service... Jul 10 00:35:10.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.435845 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 10 00:35:10.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.435960 systemd[1]: Stopped systemd-udev-trigger.service. Jul 10 00:35:10.440809 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 10 00:35:10.440905 systemd[1]: Stopped dracut-pre-trigger.service. Jul 10 00:35:10.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.444264 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 10 00:35:10.444858 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 10 00:35:10.444946 systemd[1]: Stopped ignition-mount.service. Jul 10 00:35:10.446057 systemd[1]: Stopped target network.target. Jul 10 00:35:10.447236 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 10 00:35:10.447296 systemd[1]: Stopped ignition-disks.service. Jul 10 00:35:10.449340 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 10 00:35:10.449384 systemd[1]: Stopped ignition-kargs.service. Jul 10 00:35:10.450763 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 10 00:35:10.450806 systemd[1]: Stopped ignition-setup.service. Jul 10 00:35:10.452182 systemd[1]: Stopping systemd-networkd.service... 
Jul 10 00:35:10.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.453697 systemd[1]: Stopping systemd-resolved.service... Jul 10 00:35:10.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.455210 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 10 00:35:10.455318 systemd[1]: Finished initrd-cleanup.service. Jul 10 00:35:10.464259 systemd-networkd[734]: eth0: DHCPv6 lease lost Jul 10 00:35:10.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.474000 audit: BPF prog-id=6 op=UNLOAD Jul 10 00:35:10.465332 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 10 00:35:10.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.465450 systemd[1]: Stopped systemd-resolved.service. Jul 10 00:35:10.477000 audit: BPF prog-id=9 op=UNLOAD Jul 10 00:35:10.467241 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 10 00:35:10.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.467349 systemd[1]: Stopped systemd-networkd.service. Jul 10 00:35:10.468557 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 10 00:35:10.468585 systemd[1]: Closed systemd-networkd.socket. Jul 10 00:35:10.470988 systemd[1]: Stopping network-cleanup.service... Jul 10 00:35:10.471850 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 10 00:35:10.471906 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 10 00:35:10.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.474018 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 00:35:10.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.474062 systemd[1]: Stopped systemd-sysctl.service. Jul 10 00:35:10.476335 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 10 00:35:10.476378 systemd[1]: Stopped systemd-modules-load.service. Jul 10 00:35:10.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.478339 systemd[1]: Stopping systemd-udevd.service... Jul 10 00:35:10.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.484000 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Jul 10 00:35:10.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.487074 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 10 00:35:10.487197 systemd[1]: Stopped network-cleanup.service. Jul 10 00:35:10.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.488494 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 10 00:35:10.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.488607 systemd[1]: Stopped systemd-udevd.service. Jul 10 00:35:10.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.489792 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 10 00:35:10.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:10.489825 systemd[1]: Closed systemd-udevd-control.socket. Jul 10 00:35:10.491160 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 10 00:35:10.491192 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 10 00:35:10.492512 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 10 00:35:10.492557 systemd[1]: Stopped dracut-pre-udev.service. Jul 10 00:35:10.494038 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 10 00:35:10.494078 systemd[1]: Stopped dracut-cmdline.service. Jul 10 00:35:10.495346 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 10 00:35:10.495386 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 10 00:35:10.513000 audit: BPF prog-id=8 op=UNLOAD Jul 10 00:35:10.513000 audit: BPF prog-id=7 op=UNLOAD Jul 10 00:35:10.497417 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 10 00:35:10.498319 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:35:10.515000 audit: BPF prog-id=5 op=UNLOAD Jul 10 00:35:10.515000 audit: BPF prog-id=4 op=UNLOAD Jul 10 00:35:10.515000 audit: BPF prog-id=3 op=UNLOAD Jul 10 00:35:10.498374 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 10 00:35:10.500254 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 10 00:35:10.500367 systemd[1]: Stopped sysroot-boot.service. Jul 10 00:35:10.501307 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 10 00:35:10.501348 systemd[1]: Stopped initrd-setup-root.service. Jul 10 00:35:10.502848 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 10 00:35:10.502940 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 10 00:35:10.504155 systemd[1]: Reached target initrd-switch-root.target. 
Jul 10 00:35:10.506401 systemd[1]: Starting initrd-switch-root.service... Jul 10 00:35:10.512125 systemd[1]: Switching root. Jul 10 00:35:10.530503 iscsid[744]: iscsid shutting down. Jul 10 00:35:10.531187 systemd-journald[290]: Journal stopped Jul 10 00:35:12.553112 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). Jul 10 00:35:12.553182 kernel: SELinux: Class mctp_socket not defined in policy. Jul 10 00:35:12.553195 kernel: SELinux: Class anon_inode not defined in policy. Jul 10 00:35:12.553210 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 10 00:35:12.553220 kernel: SELinux: policy capability network_peer_controls=1 Jul 10 00:35:12.553234 kernel: SELinux: policy capability open_perms=1 Jul 10 00:35:12.553245 kernel: SELinux: policy capability extended_socket_class=1 Jul 10 00:35:12.553257 kernel: SELinux: policy capability always_check_network=0 Jul 10 00:35:12.553268 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 10 00:35:12.553287 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 10 00:35:12.553296 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 10 00:35:12.553310 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 10 00:35:12.553322 systemd[1]: Successfully loaded SELinux policy in 34.528ms. Jul 10 00:35:12.553339 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.151ms. Jul 10 00:35:12.553351 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 10 00:35:12.553364 systemd[1]: Detected virtualization kvm. Jul 10 00:35:12.553374 systemd[1]: Detected architecture arm64. Jul 10 00:35:12.553385 systemd[1]: Detected first boot. Jul 10 00:35:12.553396 systemd[1]: Initializing machine ID from VM UUID. Jul 10 00:35:12.553406 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 10 00:35:12.553420 systemd[1]: Populated /etc with preset unit settings. Jul 10 00:35:12.553443 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 10 00:35:12.553456 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 10 00:35:12.553467 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:35:12.553480 systemd[1]: Queued start job for default target multi-user.target. Jul 10 00:35:12.553492 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 10 00:35:12.553503 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 10 00:35:12.553515 systemd[1]: Created slice system-addon\x2drun.slice. Jul 10 00:35:12.553526 systemd[1]: Created slice system-getty.slice. Jul 10 00:35:12.553538 systemd[1]: Created slice system-modprobe.slice. Jul 10 00:35:12.553548 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 10 00:35:12.553559 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 10 00:35:12.553569 systemd[1]: Created slice system-systemd\x2dfsck.slice. 
Jul 10 00:35:12.553580 systemd[1]: Created slice user.slice. Jul 10 00:35:12.553590 systemd[1]: Started systemd-ask-password-console.path. Jul 10 00:35:12.553600 systemd[1]: Started systemd-ask-password-wall.path. Jul 10 00:35:12.553611 systemd[1]: Set up automount boot.automount. Jul 10 00:35:12.553621 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 10 00:35:12.553633 systemd[1]: Reached target integritysetup.target. Jul 10 00:35:12.553643 systemd[1]: Reached target remote-cryptsetup.target. Jul 10 00:35:12.553654 systemd[1]: Reached target remote-fs.target. Jul 10 00:35:12.553664 systemd[1]: Reached target slices.target. Jul 10 00:35:12.553675 systemd[1]: Reached target swap.target. Jul 10 00:35:12.553686 systemd[1]: Reached target torcx.target. Jul 10 00:35:12.553696 systemd[1]: Reached target veritysetup.target. Jul 10 00:35:12.553706 systemd[1]: Listening on systemd-coredump.socket. Jul 10 00:35:12.553718 systemd[1]: Listening on systemd-initctl.socket. Jul 10 00:35:12.553728 systemd[1]: Listening on systemd-journald-audit.socket. Jul 10 00:35:12.553739 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 10 00:35:12.553750 systemd[1]: Listening on systemd-journald.socket. Jul 10 00:35:12.553760 systemd[1]: Listening on systemd-networkd.socket. Jul 10 00:35:12.553770 systemd[1]: Listening on systemd-udevd-control.socket. Jul 10 00:35:12.553780 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 10 00:35:12.553791 systemd[1]: Listening on systemd-userdbd.socket. Jul 10 00:35:12.553802 systemd[1]: Mounting dev-hugepages.mount... Jul 10 00:35:12.553813 systemd[1]: Mounting dev-mqueue.mount... Jul 10 00:35:12.553824 systemd[1]: Mounting media.mount... Jul 10 00:35:12.553835 systemd[1]: Mounting sys-kernel-debug.mount... Jul 10 00:35:12.554057 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 10 00:35:12.554084 systemd[1]: Mounting tmp.mount... Jul 10 00:35:12.554095 systemd[1]: Starting flatcar-tmpfiles.service... Jul 10 00:35:12.554107 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:35:12.554118 systemd[1]: Starting kmod-static-nodes.service... Jul 10 00:35:12.554128 systemd[1]: Starting modprobe@configfs.service... Jul 10 00:35:12.554139 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:35:12.554150 systemd[1]: Starting modprobe@drm.service... Jul 10 00:35:12.554163 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:35:12.554173 systemd[1]: Starting modprobe@fuse.service... Jul 10 00:35:12.554184 systemd[1]: Starting modprobe@loop.service... Jul 10 00:35:12.554195 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 10 00:35:12.554206 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 10 00:35:12.554218 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Jul 10 00:35:12.554229 systemd[1]: Starting systemd-journald.service... Jul 10 00:35:12.554239 systemd[1]: Starting systemd-modules-load.service... Jul 10 00:35:12.554251 systemd[1]: Starting systemd-network-generator.service... Jul 10 00:35:12.554261 systemd[1]: Starting systemd-remount-fs.service... Jul 10 00:35:12.554284 kernel: fuse: init (API version 7.34) Jul 10 00:35:12.554297 systemd[1]: Starting systemd-udev-trigger.service... Jul 10 00:35:12.554308 systemd[1]: Mounted dev-hugepages.mount. 
Jul 10 00:35:12.554319 systemd[1]: Mounted dev-mqueue.mount. Jul 10 00:35:12.554329 kernel: loop: module loaded Jul 10 00:35:12.554339 systemd[1]: Mounted media.mount. Jul 10 00:35:12.554349 systemd[1]: Mounted sys-kernel-debug.mount. Jul 10 00:35:12.554361 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 10 00:35:12.554372 systemd[1]: Mounted tmp.mount. Jul 10 00:35:12.554383 systemd[1]: Finished kmod-static-nodes.service. Jul 10 00:35:12.554393 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 10 00:35:12.554405 systemd[1]: Finished modprobe@configfs.service. Jul 10 00:35:12.554415 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:35:12.554426 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:35:12.554457 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:35:12.554471 systemd-journald[1023]: Journal started Jul 10 00:35:12.554516 systemd-journald[1023]: Runtime Journal (/run/log/journal/c82acac781fe4258b3c2d2e5ca035766) is 6.0M, max 48.7M, 42.6M free. Jul 10 00:35:12.458000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 10 00:35:12.458000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jul 10 00:35:12.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:12.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:12.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:12.551000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 10 00:35:12.551000 audit[1023]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffd01dfc70 a2=4000 a3=1 items=0 ppid=1 pid=1023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:12.551000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 10 00:35:12.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:12.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:12.556025 systemd[1]: Finished modprobe@drm.service. Jul 10 00:35:12.557888 systemd[1]: Started systemd-journald.service. 
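The run of modprobe@configfs / modprobe@dm_mod / modprobe@drm / modprobe@efi_pstore / modprobe@fuse / modprobe@loop starts and finishes in this stretch all come from a single systemd template unit instantiated once per module name; the "fuse: init" and "loop: module loaded" kernel lines confirm the modules actually came up. Roughly what such a template looks like (a sketch; the unit Flatcar actually ships may differ in detail):

# modprobe@.service  (sketch)
[Unit]
Description=Load Kernel Module %i
DefaultDependencies=no
Before=sysinit.target

[Service]
Type=oneshot
ExecStart=-/sbin/modprobe -abq %i

Each instance substitutes its name for %i, so systemctl start modprobe@fuse.service ends up running modprobe -abq fuse; the leading "-" on ExecStart keeps a missing module from marking the unit failed.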
Jul 10 00:35:12.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:12.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:12.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:12.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:12.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:12.559289 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:35:12.559513 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:35:12.560672 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 10 00:35:12.560985 systemd[1]: Finished modprobe@fuse.service. Jul 10 00:35:12.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:12.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:12.562132 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:35:12.562535 systemd[1]: Finished modprobe@loop.service. Jul 10 00:35:12.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:12.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:12.563742 systemd[1]: Finished systemd-modules-load.service. Jul 10 00:35:12.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:12.564994 systemd[1]: Finished systemd-network-generator.service. Jul 10 00:35:12.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:12.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:35:12.566393 systemd[1]: Finished systemd-remount-fs.service. Jul 10 00:35:12.567618 systemd[1]: Reached target network-pre.target. Jul 10 00:35:12.569656 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 10 00:35:12.571516 systemd[1]: Mounting sys-kernel-config.mount... Jul 10 00:35:12.572233 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 10 00:35:12.574128 systemd[1]: Starting systemd-hwdb-update.service... Jul 10 00:35:12.576258 systemd[1]: Starting systemd-journal-flush.service... Jul 10 00:35:12.577121 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:35:12.578208 systemd[1]: Starting systemd-random-seed.service... Jul 10 00:35:12.579110 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 00:35:12.580215 systemd[1]: Starting systemd-sysctl.service... Jul 10 00:35:12.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:12.583838 systemd[1]: Finished flatcar-tmpfiles.service. Jul 10 00:35:12.584879 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 10 00:35:12.585822 systemd[1]: Mounted sys-kernel-config.mount. Jul 10 00:35:12.587941 systemd[1]: Starting systemd-sysusers.service... Jul 10 00:35:12.590808 systemd-journald[1023]: Time spent on flushing to /var/log/journal/c82acac781fe4258b3c2d2e5ca035766 is 12.291ms for 928 entries. Jul 10 00:35:12.590808 systemd-journald[1023]: System Journal (/var/log/journal/c82acac781fe4258b3c2d2e5ca035766) is 8.0M, max 195.6M, 187.6M free. Jul 10 00:35:12.615612 systemd-journald[1023]: Received client request to flush runtime journal. Jul 10 00:35:12.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:12.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:12.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:12.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:12.591365 systemd[1]: Finished systemd-udev-trigger.service. Jul 10 00:35:12.595193 systemd[1]: Starting systemd-udev-settle.service... Jul 10 00:35:12.616142 udevadm[1080]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 10 00:35:12.599643 systemd[1]: Finished systemd-random-seed.service. Jul 10 00:35:12.600941 systemd[1]: Finished systemd-sysctl.service. Jul 10 00:35:12.601854 systemd[1]: Reached target first-boot-complete.target. Jul 10 00:35:12.612024 systemd[1]: Finished systemd-sysusers.service. 
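The journald figures above show the hand-off from the volatile journal under /run (6.0M used, 48.7M cap) to the persistent one under /var/log/journal (8.0M used, 195.6M cap); the "Received client request to flush runtime journal" entry is the flush service asking for that migration. By default the caps are derived from the size of the backing filesystem, but they can be pinned explicitly; a drop-in sketch with illustrative values:

# /etc/systemd/journald.conf.d/10-size.conf  (values are illustrative)
[Journal]
Storage=persistent
RuntimeMaxUse=48M
SystemMaxUse=200M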
Jul 10 00:35:12.614164 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 10 00:35:12.616614 systemd[1]: Finished systemd-journal-flush.service. Jul 10 00:35:12.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:12.632154 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 10 00:35:12.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:12.952319 systemd[1]: Finished systemd-hwdb-update.service. Jul 10 00:35:12.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:12.954525 systemd[1]: Starting systemd-udevd.service... Jul 10 00:35:12.974015 systemd-udevd[1090]: Using default interface naming scheme 'v252'. Jul 10 00:35:12.990861 systemd[1]: Started systemd-udevd.service. Jul 10 00:35:12.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:12.993504 systemd[1]: Starting systemd-networkd.service... Jul 10 00:35:13.019850 systemd[1]: Starting systemd-userdbd.service... Jul 10 00:35:13.029564 systemd[1]: Found device dev-ttyAMA0.device. Jul 10 00:35:13.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.058574 systemd[1]: Started systemd-userdbd.service. Jul 10 00:35:13.062135 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 10 00:35:13.113310 systemd-networkd[1098]: lo: Link UP Jul 10 00:35:13.113318 systemd-networkd[1098]: lo: Gained carrier Jul 10 00:35:13.113693 systemd-networkd[1098]: Enumeration completed Jul 10 00:35:13.113802 systemd[1]: Started systemd-networkd.service. Jul 10 00:35:13.113802 systemd-networkd[1098]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:35:13.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.116825 systemd[1]: Finished systemd-udev-settle.service. Jul 10 00:35:13.116910 systemd-networkd[1098]: eth0: Link UP Jul 10 00:35:13.116919 systemd-networkd[1098]: eth0: Gained carrier Jul 10 00:35:13.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.118960 systemd[1]: Starting lvm2-activation-early.service... Jul 10 00:35:13.136560 systemd-networkd[1098]: eth0: DHCPv4 address 10.0.0.85/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 00:35:13.138678 lvm[1124]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 00:35:13.162343 systemd[1]: Finished lvm2-activation-early.service. 
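The eth0 interface above is matched by the catch-all /usr/lib/systemd/network/zz-default.network shipped in the image, which enables DHCP; the 10.0.0.85/16 lease and 10.0.0.1 gateway come from the hypervisor's built-in DHCP server. A local .network file with the same effect would look roughly like this (illustrative; on a stock image the shipped default already covers it):

# /etc/systemd/network/50-dhcp-eth0.network  (sketch)
[Match]
Name=eth0

[Network]
DHCP=yes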
Jul 10 00:35:13.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.163398 systemd[1]: Reached target cryptsetup.target. Jul 10 00:35:13.165352 systemd[1]: Starting lvm2-activation.service... Jul 10 00:35:13.168876 lvm[1126]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 00:35:13.202307 systemd[1]: Finished lvm2-activation.service. Jul 10 00:35:13.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.203280 systemd[1]: Reached target local-fs-pre.target. Jul 10 00:35:13.204158 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 10 00:35:13.204187 systemd[1]: Reached target local-fs.target. Jul 10 00:35:13.204984 systemd[1]: Reached target machines.target. Jul 10 00:35:13.206961 systemd[1]: Starting ldconfig.service... Jul 10 00:35:13.208069 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:35:13.208122 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:35:13.209226 systemd[1]: Starting systemd-boot-update.service... Jul 10 00:35:13.211170 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 10 00:35:13.213372 systemd[1]: Starting systemd-machine-id-commit.service... Jul 10 00:35:13.215323 systemd[1]: Starting systemd-sysext.service... Jul 10 00:35:13.216474 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1129 (bootctl) Jul 10 00:35:13.217508 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 10 00:35:13.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.225183 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 10 00:35:13.240087 systemd[1]: Unmounting usr-share-oem.mount... Jul 10 00:35:13.243974 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 10 00:35:13.244191 systemd[1]: Unmounted usr-share-oem.mount. Jul 10 00:35:13.285465 kernel: loop0: detected capacity change from 0 to 203944 Jul 10 00:35:13.284774 systemd[1]: Finished systemd-machine-id-commit.service. Jul 10 00:35:13.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.296352 systemd-fsck[1137]: fsck.fat 4.2 (2021-01-31) Jul 10 00:35:13.296352 systemd-fsck[1137]: /dev/vda1: 236 files, 117310/258078 clusters Jul 10 00:35:13.296168 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Jul 10 00:35:13.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.298453 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 10 00:35:13.319452 kernel: loop1: detected capacity change from 0 to 203944 Jul 10 00:35:13.324286 (sd-sysext)[1148]: Using extensions 'kubernetes'. Jul 10 00:35:13.324647 (sd-sysext)[1148]: Merged extensions into '/usr'. Jul 10 00:35:13.341421 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:35:13.342742 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:35:13.344862 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:35:13.346813 systemd[1]: Starting modprobe@loop.service... Jul 10 00:35:13.347807 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:35:13.347966 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:35:13.348783 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:35:13.348931 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:35:13.350511 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:35:13.350642 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:35:13.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.352150 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:35:13.352314 systemd[1]: Finished modprobe@loop.service. Jul 10 00:35:13.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.353684 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:35:13.353782 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
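The (sd-sysext) lines show systemd-sysext picking up the kubernetes system extension: Ignition downloaded the raw image to /opt/extensions/kubernetes/ and linked it as /etc/extensions/kubernetes.raw during the files stage, and the merge overlays its contents onto /usr (the loop0/loop1 capacity changes and the squashfs message are consistent with that image being attached). Standard commands for inspecting or redoing the merge at runtime:

# list merged extensions and the hierarchies they cover
systemd-sysext status

# unmerge and re-merge after adding or removing an image under /etc/extensions
systemd-sysext refresh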
Jul 10 00:35:13.386003 ldconfig[1128]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 10 00:35:13.389137 systemd[1]: Finished ldconfig.service. Jul 10 00:35:13.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.536220 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 10 00:35:13.538045 systemd[1]: Mounting boot.mount... Jul 10 00:35:13.539965 systemd[1]: Mounting usr-share-oem.mount... Jul 10 00:35:13.546252 systemd[1]: Mounted boot.mount. Jul 10 00:35:13.547249 systemd[1]: Mounted usr-share-oem.mount. Jul 10 00:35:13.549157 systemd[1]: Finished systemd-sysext.service. Jul 10 00:35:13.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.551178 systemd[1]: Starting ensure-sysext.service... Jul 10 00:35:13.553031 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 10 00:35:13.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.554300 systemd[1]: Finished systemd-boot-update.service. Jul 10 00:35:13.558648 systemd[1]: Reloading. Jul 10 00:35:13.561447 systemd-tmpfiles[1166]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 10 00:35:13.562533 systemd-tmpfiles[1166]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 10 00:35:13.563798 systemd-tmpfiles[1166]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 10 00:35:13.592996 /usr/lib/systemd/system-generators/torcx-generator[1186]: time="2025-07-10T00:35:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 10 00:35:13.593025 /usr/lib/systemd/system-generators/torcx-generator[1186]: time="2025-07-10T00:35:13Z" level=info msg="torcx already run" Jul 10 00:35:13.658356 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 10 00:35:13.658375 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 10 00:35:13.675890 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:35:13.726040 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 10 00:35:13.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.729958 systemd[1]: Starting audit-rules.service... Jul 10 00:35:13.731799 systemd[1]: Starting clean-ca-certificates.service... 
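Two recurring warnings in this stretch are cosmetic but worth decoding: the tmpfiles "Duplicate line for path …, ignoring" notices mean a later tmpfiles.d entry for an already-configured path is skipped, and the locksmithd.service complaints repeat on every daemon reload because CPUShares= and MemoryLimit= are deprecated cgroup-v1 directives. Since the shipped unit lives on the read-only /usr, the usual way to follow the warning is a drop-in with the cgroup-v2 equivalents; a sketch (the values are illustrative, not taken from the real unit):

# /etc/systemd/system/locksmithd.service.d/10-cgroupv2.conf  (sketch)
[Service]
CPUWeight=100
MemoryMax=128M

Fully silencing the warning would also require clearing the old directives or overriding the unit wholesale, since the base unit on /usr still sets them.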
Jul 10 00:35:13.733880 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 10 00:35:13.736155 systemd[1]: Starting systemd-resolved.service... Jul 10 00:35:13.738226 systemd[1]: Starting systemd-timesyncd.service... Jul 10 00:35:13.740097 systemd[1]: Starting systemd-update-utmp.service... Jul 10 00:35:13.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.741507 systemd[1]: Finished clean-ca-certificates.service. Jul 10 00:35:13.745029 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:35:13.746710 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:35:13.747927 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:35:13.749000 audit[1239]: SYSTEM_BOOT pid=1239 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.749842 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:35:13.752127 systemd[1]: Starting modprobe@loop.service... Jul 10 00:35:13.752999 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:35:13.753129 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:35:13.753228 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:35:13.757668 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 10 00:35:13.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.759088 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:35:13.759238 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:35:13.760482 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:35:13.760614 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:35:13.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:35:13.761878 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:35:13.762022 systemd[1]: Finished modprobe@loop.service. Jul 10 00:35:13.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.766403 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:35:13.766670 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 00:35:13.767943 systemd[1]: Starting systemd-update-done.service... Jul 10 00:35:13.769423 systemd[1]: Finished systemd-update-utmp.service. Jul 10 00:35:13.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.772440 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:35:13.773592 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:35:13.775575 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:35:13.778411 systemd[1]: Starting modprobe@loop.service... Jul 10 00:35:13.779202 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:35:13.779361 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:35:13.779499 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:35:13.780545 systemd[1]: Finished systemd-update-done.service. Jul 10 00:35:13.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.781835 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:35:13.781979 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:35:13.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.783171 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:35:13.786799 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:35:13.788101 systemd[1]: Starting modprobe@drm.service... Jul 10 00:35:13.790011 systemd[1]: Starting modprobe@efi_pstore.service... 
Jul 10 00:35:13.791026 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:35:13.791181 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:35:13.796340 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 10 00:35:13.797411 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:35:13.798521 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:35:13.798711 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:35:13.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.799910 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:35:13.800052 systemd[1]: Finished modprobe@loop.service. Jul 10 00:35:13.801286 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:35:13.801425 systemd[1]: Finished modprobe@drm.service. Jul 10 00:35:13.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.802666 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:35:13.802841 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:35:13.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.805364 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:35:13.805449 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 00:35:13.806691 systemd[1]: Finished ensure-sysext.service. 
Jul 10 00:35:13.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:13.820461 augenrules[1278]: No rules Jul 10 00:35:13.819000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 10 00:35:13.819000 audit[1278]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffdb2aa90 a2=420 a3=0 items=0 ppid=1232 pid=1278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:13.819000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 10 00:35:13.821405 systemd[1]: Finished audit-rules.service. Jul 10 00:35:13.825505 systemd[1]: Started systemd-timesyncd.service. Jul 10 00:35:13.826157 systemd-timesyncd[1237]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 10 00:35:13.826213 systemd-timesyncd[1237]: Initial clock synchronization to Thu 2025-07-10 00:35:13.717346 UTC. Jul 10 00:35:13.826708 systemd[1]: Reached target time-set.target. Jul 10 00:35:13.829977 systemd-resolved[1236]: Positive Trust Anchors: Jul 10 00:35:13.831840 systemd-resolved[1236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:35:13.831871 systemd-resolved[1236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 10 00:35:13.843402 systemd-resolved[1236]: Defaulting to hostname 'linux'. Jul 10 00:35:13.846657 systemd[1]: Started systemd-resolved.service. Jul 10 00:35:13.847521 systemd[1]: Reached target network.target. Jul 10 00:35:13.848281 systemd[1]: Reached target nss-lookup.target. Jul 10 00:35:13.849098 systemd[1]: Reached target sysinit.target. Jul 10 00:35:13.849954 systemd[1]: Started motdgen.path. Jul 10 00:35:13.850712 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 10 00:35:13.851925 systemd[1]: Started logrotate.timer. Jul 10 00:35:13.852758 systemd[1]: Started mdadm.timer. Jul 10 00:35:13.853416 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 10 00:35:13.854279 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 10 00:35:13.854310 systemd[1]: Reached target paths.target. Jul 10 00:35:13.855093 systemd[1]: Reached target timers.target. Jul 10 00:35:13.856167 systemd[1]: Listening on dbus.socket. Jul 10 00:35:13.858103 systemd[1]: Starting docker.socket... Jul 10 00:35:13.859950 systemd[1]: Listening on sshd.socket. Jul 10 00:35:13.860851 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:35:13.861182 systemd[1]: Listening on docker.socket. Jul 10 00:35:13.861987 systemd[1]: Reached target sockets.target. 
Jul 10 00:35:13.862790 systemd[1]: Reached target basic.target. Jul 10 00:35:13.863735 systemd[1]: System is tainted: cgroupsv1 Jul 10 00:35:13.863788 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 10 00:35:13.863808 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 10 00:35:13.864891 systemd[1]: Starting containerd.service... Jul 10 00:35:13.866756 systemd[1]: Starting dbus.service... Jul 10 00:35:13.868575 systemd[1]: Starting enable-oem-cloudinit.service... Jul 10 00:35:13.870699 systemd[1]: Starting extend-filesystems.service... Jul 10 00:35:13.871663 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 10 00:35:13.872953 systemd[1]: Starting motdgen.service... Jul 10 00:35:13.875164 systemd[1]: Starting prepare-helm.service... Jul 10 00:35:13.876459 jq[1290]: false Jul 10 00:35:13.877874 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 10 00:35:13.880187 systemd[1]: Starting sshd-keygen.service... Jul 10 00:35:13.882967 systemd[1]: Starting systemd-logind.service... Jul 10 00:35:13.884214 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:35:13.884307 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 10 00:35:13.885763 systemd[1]: Starting update-engine.service... Jul 10 00:35:13.887904 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 10 00:35:13.893536 jq[1306]: true Jul 10 00:35:13.890917 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 10 00:35:13.891163 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 10 00:35:13.892155 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 10 00:35:13.892411 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 10 00:35:13.904171 extend-filesystems[1291]: Found loop1 Jul 10 00:35:13.904171 extend-filesystems[1291]: Found vda Jul 10 00:35:13.904171 extend-filesystems[1291]: Found vda1 Jul 10 00:35:13.904171 extend-filesystems[1291]: Found vda2 Jul 10 00:35:13.904171 extend-filesystems[1291]: Found vda3 Jul 10 00:35:13.904171 extend-filesystems[1291]: Found usr Jul 10 00:35:13.904171 extend-filesystems[1291]: Found vda4 Jul 10 00:35:13.904171 extend-filesystems[1291]: Found vda6 Jul 10 00:35:13.904171 extend-filesystems[1291]: Found vda7 Jul 10 00:35:13.904171 extend-filesystems[1291]: Found vda9 Jul 10 00:35:13.904171 extend-filesystems[1291]: Checking size of /dev/vda9 Jul 10 00:35:13.928456 jq[1314]: true Jul 10 00:35:13.909494 systemd[1]: motdgen.service: Deactivated successfully. Jul 10 00:35:13.928611 tar[1313]: linux-arm64/helm Jul 10 00:35:13.909728 systemd[1]: Finished motdgen.service. Jul 10 00:35:13.934811 dbus-daemon[1289]: [system] SELinux support is enabled Jul 10 00:35:13.934973 systemd[1]: Started dbus.service. Jul 10 00:35:13.937481 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 10 00:35:13.937512 systemd[1]: Reached target system-config.target. 
Jul 10 00:35:13.938388 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 10 00:35:13.938405 systemd[1]: Reached target user-config.target. Jul 10 00:35:13.948503 extend-filesystems[1291]: Resized partition /dev/vda9 Jul 10 00:35:13.959150 extend-filesystems[1342]: resize2fs 1.46.5 (30-Dec-2021) Jul 10 00:35:13.954565 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 10 00:35:13.960913 bash[1339]: Updated "/home/core/.ssh/authorized_keys" Jul 10 00:35:13.974503 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 10 00:35:13.977470 update_engine[1305]: I0710 00:35:13.974328 1305 main.cc:92] Flatcar Update Engine starting Jul 10 00:35:13.979063 systemd-logind[1302]: Watching system buttons on /dev/input/event0 (Power Button) Jul 10 00:35:13.980295 systemd-logind[1302]: New seat seat0. Jul 10 00:35:13.981288 systemd[1]: Started update-engine.service. Jul 10 00:35:13.982020 update_engine[1305]: I0710 00:35:13.981814 1305 update_check_scheduler.cc:74] Next update check in 2m11s Jul 10 00:35:13.984707 systemd[1]: Started locksmithd.service. Jul 10 00:35:13.988610 systemd[1]: Started systemd-logind.service. Jul 10 00:35:13.991144 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 10 00:35:14.004404 extend-filesystems[1342]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 10 00:35:14.004404 extend-filesystems[1342]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 10 00:35:14.004404 extend-filesystems[1342]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 10 00:35:14.002985 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 10 00:35:14.009165 extend-filesystems[1291]: Resized filesystem in /dev/vda9 Jul 10 00:35:14.003235 systemd[1]: Finished extend-filesystems.service. Jul 10 00:35:14.011044 env[1315]: time="2025-07-10T00:35:14.010643074Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 10 00:35:14.045575 env[1315]: time="2025-07-10T00:35:14.045301348Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 10 00:35:14.045696 env[1315]: time="2025-07-10T00:35:14.045672593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:35:14.048872 env[1315]: time="2025-07-10T00:35:14.047202985Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.186-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:35:14.048872 env[1315]: time="2025-07-10T00:35:14.047235182Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:35:14.048872 env[1315]: time="2025-07-10T00:35:14.047696505Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:35:14.048872 env[1315]: time="2025-07-10T00:35:14.047716746Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jul 10 00:35:14.048872 env[1315]: time="2025-07-10T00:35:14.047730240Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 10 00:35:14.048872 env[1315]: time="2025-07-10T00:35:14.047741091Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 10 00:35:14.048872 env[1315]: time="2025-07-10T00:35:14.047902191Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:35:14.048872 env[1315]: time="2025-07-10T00:35:14.048459669Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:35:14.048872 env[1315]: time="2025-07-10T00:35:14.048717595Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:35:14.048872 env[1315]: time="2025-07-10T00:35:14.048798836Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 10 00:35:14.049101 env[1315]: time="2025-07-10T00:35:14.048863662Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 10 00:35:14.049101 env[1315]: time="2025-07-10T00:35:14.048877472Z" level=info msg="metadata content store policy set" policy=shared Jul 10 00:35:14.053967 env[1315]: time="2025-07-10T00:35:14.052988387Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 10 00:35:14.053967 env[1315]: time="2025-07-10T00:35:14.053022043Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 10 00:35:14.053967 env[1315]: time="2025-07-10T00:35:14.053035498Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 10 00:35:14.053967 env[1315]: time="2025-07-10T00:35:14.053070811Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 10 00:35:14.053967 env[1315]: time="2025-07-10T00:35:14.053086002Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 10 00:35:14.053967 env[1315]: time="2025-07-10T00:35:14.053099930Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 10 00:35:14.053967 env[1315]: time="2025-07-10T00:35:14.053112516Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 10 00:35:14.053967 env[1315]: time="2025-07-10T00:35:14.053482222Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 10 00:35:14.053967 env[1315]: time="2025-07-10T00:35:14.053503292Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 10 00:35:14.053967 env[1315]: time="2025-07-10T00:35:14.053515997Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 10 00:35:14.053967 env[1315]: time="2025-07-10T00:35:14.053535843Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jul 10 00:35:14.053967 env[1315]: time="2025-07-10T00:35:14.053552218Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 10 00:35:14.053967 env[1315]: time="2025-07-10T00:35:14.053664313Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 10 00:35:14.053967 env[1315]: time="2025-07-10T00:35:14.053734387Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 10 00:35:14.054269 env[1315]: time="2025-07-10T00:35:14.054013186Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 10 00:35:14.054269 env[1315]: time="2025-07-10T00:35:14.054039385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 10 00:35:14.054269 env[1315]: time="2025-07-10T00:35:14.054051498Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 10 00:35:14.054269 env[1315]: time="2025-07-10T00:35:14.054153216Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 10 00:35:14.054269 env[1315]: time="2025-07-10T00:35:14.054166868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 10 00:35:14.054269 env[1315]: time="2025-07-10T00:35:14.054178192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 10 00:35:14.054269 env[1315]: time="2025-07-10T00:35:14.054189477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 10 00:35:14.054269 env[1315]: time="2025-07-10T00:35:14.054200722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 10 00:35:14.054269 env[1315]: time="2025-07-10T00:35:14.054211651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 10 00:35:14.054269 env[1315]: time="2025-07-10T00:35:14.054222580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 10 00:35:14.054269 env[1315]: time="2025-07-10T00:35:14.054240257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 10 00:35:14.054269 env[1315]: time="2025-07-10T00:35:14.054252567Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 10 00:35:14.054511 env[1315]: time="2025-07-10T00:35:14.054364623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 10 00:35:14.054511 env[1315]: time="2025-07-10T00:35:14.054380919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 10 00:35:14.054511 env[1315]: time="2025-07-10T00:35:14.054392479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 10 00:35:14.054511 env[1315]: time="2025-07-10T00:35:14.054403685Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 10 00:35:14.054511 env[1315]: time="2025-07-10T00:35:14.054425662Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 10 00:35:14.054511 env[1315]: time="2025-07-10T00:35:14.054457464Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 10 00:35:14.054511 env[1315]: time="2025-07-10T00:35:14.054481690Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 10 00:35:14.054511 env[1315]: time="2025-07-10T00:35:14.054514084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 10 00:35:14.055404 env[1315]: time="2025-07-10T00:35:14.054811900Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 10 00:35:14.055404 env[1315]: time="2025-07-10T00:35:14.054884421Z" level=info msg="Connect containerd service" Jul 10 00:35:14.055404 env[1315]: time="2025-07-10T00:35:14.054918748Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 10 00:35:14.057889 env[1315]: time="2025-07-10T00:35:14.055994445Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:35:14.057889 env[1315]: time="2025-07-10T00:35:14.056280701Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jul 10 00:35:14.057889 env[1315]: time="2025-07-10T00:35:14.056315936Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 00:35:14.056473 systemd[1]: Started containerd.service. Jul 10 00:35:14.060186 env[1315]: time="2025-07-10T00:35:14.058022974Z" level=info msg="Start subscribing containerd event" Jul 10 00:35:14.060186 env[1315]: time="2025-07-10T00:35:14.058078094Z" level=info msg="Start recovering state" Jul 10 00:35:14.060186 env[1315]: time="2025-07-10T00:35:14.058133964Z" level=info msg="Start event monitor" Jul 10 00:35:14.060186 env[1315]: time="2025-07-10T00:35:14.058151049Z" level=info msg="Start snapshots syncer" Jul 10 00:35:14.060186 env[1315]: time="2025-07-10T00:35:14.058160755Z" level=info msg="Start cni network conf syncer for default" Jul 10 00:35:14.060186 env[1315]: time="2025-07-10T00:35:14.058168055Z" level=info msg="Start streaming server" Jul 10 00:35:14.060534 env[1315]: time="2025-07-10T00:35:14.060406254Z" level=info msg="containerd successfully booted in 0.050752s" Jul 10 00:35:14.068871 locksmithd[1349]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 10 00:35:14.285773 tar[1313]: linux-arm64/LICENSE Jul 10 00:35:14.285938 tar[1313]: linux-arm64/README.md Jul 10 00:35:14.290005 systemd[1]: Finished prepare-helm.service. Jul 10 00:35:14.467535 systemd-networkd[1098]: eth0: Gained IPv6LL Jul 10 00:35:14.469568 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 10 00:35:14.470897 systemd[1]: Reached target network-online.target. Jul 10 00:35:14.473870 systemd[1]: Starting kubelet.service... Jul 10 00:35:14.946541 sshd_keygen[1324]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 00:35:14.966471 systemd[1]: Finished sshd-keygen.service. Jul 10 00:35:14.969027 systemd[1]: Starting issuegen.service... Jul 10 00:35:14.974916 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 00:35:14.975132 systemd[1]: Finished issuegen.service. Jul 10 00:35:14.978281 systemd[1]: Starting systemd-user-sessions.service... Jul 10 00:35:14.988731 systemd[1]: Finished systemd-user-sessions.service. Jul 10 00:35:14.991142 systemd[1]: Started getty@tty1.service. Jul 10 00:35:14.993406 systemd[1]: Started serial-getty@ttyAMA0.service. Jul 10 00:35:14.994610 systemd[1]: Reached target getty.target. Jul 10 00:35:15.115303 systemd[1]: Started kubelet.service. Jul 10 00:35:15.116721 systemd[1]: Reached target multi-user.target. Jul 10 00:35:15.118932 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 10 00:35:15.127035 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 10 00:35:15.127256 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 10 00:35:15.128360 systemd[1]: Startup finished in 5.624s (kernel) + 4.539s (userspace) = 10.163s. Jul 10 00:35:15.629729 kubelet[1390]: E0710 00:35:15.629674 1390 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:35:15.631733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:35:15.632155 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:35:18.097555 systemd[1]: Created slice system-sshd.slice. 
Jul 10 00:35:18.098778 systemd[1]: Started sshd@0-10.0.0.85:22-10.0.0.1:35182.service. Jul 10 00:35:18.154183 sshd[1400]: Accepted publickey for core from 10.0.0.1 port 35182 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:35:18.159306 sshd[1400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:35:18.179346 systemd-logind[1302]: New session 1 of user core. Jul 10 00:35:18.180421 systemd[1]: Created slice user-500.slice. Jul 10 00:35:18.181652 systemd[1]: Starting user-runtime-dir@500.service... Jul 10 00:35:18.195954 systemd[1]: Finished user-runtime-dir@500.service. Jul 10 00:35:18.197503 systemd[1]: Starting user@500.service... Jul 10 00:35:18.200522 (systemd)[1405]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:35:18.267387 systemd[1405]: Queued start job for default target default.target. Jul 10 00:35:18.267658 systemd[1405]: Reached target paths.target. Jul 10 00:35:18.267673 systemd[1405]: Reached target sockets.target. Jul 10 00:35:18.267684 systemd[1405]: Reached target timers.target. Jul 10 00:35:18.267693 systemd[1405]: Reached target basic.target. Jul 10 00:35:18.267736 systemd[1405]: Reached target default.target. Jul 10 00:35:18.267758 systemd[1405]: Startup finished in 61ms. Jul 10 00:35:18.268003 systemd[1]: Started user@500.service. Jul 10 00:35:18.269123 systemd[1]: Started session-1.scope. Jul 10 00:35:18.319105 systemd[1]: Started sshd@1-10.0.0.85:22-10.0.0.1:35194.service. Jul 10 00:35:18.363531 sshd[1414]: Accepted publickey for core from 10.0.0.1 port 35194 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:35:18.365084 sshd[1414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:35:18.369232 systemd[1]: Started session-2.scope. Jul 10 00:35:18.369626 systemd-logind[1302]: New session 2 of user core. Jul 10 00:35:18.423727 sshd[1414]: pam_unix(sshd:session): session closed for user core Jul 10 00:35:18.426296 systemd[1]: Started sshd@2-10.0.0.85:22-10.0.0.1:35208.service. Jul 10 00:35:18.428247 systemd-logind[1302]: Session 2 logged out. Waiting for processes to exit. Jul 10 00:35:18.429102 systemd[1]: sshd@1-10.0.0.85:22-10.0.0.1:35194.service: Deactivated successfully. Jul 10 00:35:18.429881 systemd[1]: session-2.scope: Deactivated successfully. Jul 10 00:35:18.430542 systemd-logind[1302]: Removed session 2. Jul 10 00:35:18.468417 sshd[1419]: Accepted publickey for core from 10.0.0.1 port 35208 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:35:18.469574 sshd[1419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:35:18.473656 systemd-logind[1302]: New session 3 of user core. Jul 10 00:35:18.474413 systemd[1]: Started session-3.scope. Jul 10 00:35:18.525039 sshd[1419]: pam_unix(sshd:session): session closed for user core Jul 10 00:35:18.527159 systemd[1]: Started sshd@3-10.0.0.85:22-10.0.0.1:35224.service. Jul 10 00:35:18.527633 systemd[1]: sshd@2-10.0.0.85:22-10.0.0.1:35208.service: Deactivated successfully. Jul 10 00:35:18.528606 systemd[1]: session-3.scope: Deactivated successfully. Jul 10 00:35:18.528609 systemd-logind[1302]: Session 3 logged out. Waiting for processes to exit. Jul 10 00:35:18.529709 systemd-logind[1302]: Removed session 3. 
Jul 10 00:35:18.570208 sshd[1426]: Accepted publickey for core from 10.0.0.1 port 35224 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:35:18.571461 sshd[1426]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:35:18.574533 systemd-logind[1302]: New session 4 of user core. Jul 10 00:35:18.575342 systemd[1]: Started session-4.scope. Jul 10 00:35:18.628391 sshd[1426]: pam_unix(sshd:session): session closed for user core Jul 10 00:35:18.630524 systemd[1]: Started sshd@4-10.0.0.85:22-10.0.0.1:35228.service. Jul 10 00:35:18.631201 systemd[1]: sshd@3-10.0.0.85:22-10.0.0.1:35224.service: Deactivated successfully. Jul 10 00:35:18.632159 systemd-logind[1302]: Session 4 logged out. Waiting for processes to exit. Jul 10 00:35:18.632187 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 00:35:18.633027 systemd-logind[1302]: Removed session 4. Jul 10 00:35:18.673221 sshd[1433]: Accepted publickey for core from 10.0.0.1 port 35228 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:35:18.675141 sshd[1433]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:35:18.678463 systemd-logind[1302]: New session 5 of user core. Jul 10 00:35:18.679239 systemd[1]: Started session-5.scope. Jul 10 00:35:18.738110 sudo[1439]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 10 00:35:18.738334 sudo[1439]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 10 00:35:18.751399 dbus-daemon[1289]: avc: received setenforce notice (enforcing=1) Jul 10 00:35:18.751703 sudo[1439]: pam_unix(sudo:session): session closed for user root Jul 10 00:35:18.753556 sshd[1433]: pam_unix(sshd:session): session closed for user core Jul 10 00:35:18.756049 systemd[1]: Started sshd@5-10.0.0.85:22-10.0.0.1:35240.service. Jul 10 00:35:18.756872 systemd[1]: sshd@4-10.0.0.85:22-10.0.0.1:35228.service: Deactivated successfully. Jul 10 00:35:18.757763 systemd-logind[1302]: Session 5 logged out. Waiting for processes to exit. Jul 10 00:35:18.757828 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 00:35:18.759022 systemd-logind[1302]: Removed session 5. Jul 10 00:35:18.799178 sshd[1441]: Accepted publickey for core from 10.0.0.1 port 35240 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:35:18.800367 sshd[1441]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:35:18.803731 systemd-logind[1302]: New session 6 of user core. Jul 10 00:35:18.804545 systemd[1]: Started session-6.scope. Jul 10 00:35:18.861763 sudo[1448]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 10 00:35:18.861996 sudo[1448]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 10 00:35:18.866818 sudo[1448]: pam_unix(sudo:session): session closed for user root Jul 10 00:35:18.871721 sudo[1447]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 10 00:35:18.871941 sudo[1447]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 10 00:35:18.880643 systemd[1]: Stopping audit-rules.service... 
Jul 10 00:35:18.882000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jul 10 00:35:18.883467 kernel: kauditd_printk_skb: 119 callbacks suppressed Jul 10 00:35:18.883502 kernel: audit: type=1305 audit(1752107718.882:152): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jul 10 00:35:18.883794 auditctl[1451]: No rules Jul 10 00:35:18.883997 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 00:35:18.884222 systemd[1]: Stopped audit-rules.service. Jul 10 00:35:18.882000 audit[1451]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc0605800 a2=420 a3=0 items=0 ppid=1 pid=1451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:18.885846 systemd[1]: Starting audit-rules.service... Jul 10 00:35:18.889399 kernel: audit: type=1300 audit(1752107718.882:152): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc0605800 a2=420 a3=0 items=0 ppid=1 pid=1451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:18.889471 kernel: audit: type=1327 audit(1752107718.882:152): proctitle=2F7362696E2F617564697463746C002D44 Jul 10 00:35:18.882000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jul 10 00:35:18.891016 kernel: audit: type=1131 audit(1752107718.883:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:18.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:18.902229 augenrules[1469]: No rules Jul 10 00:35:18.903149 systemd[1]: Finished audit-rules.service. Jul 10 00:35:18.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:18.904376 sudo[1447]: pam_unix(sudo:session): session closed for user root Jul 10 00:35:18.903000 audit[1447]: USER_END pid=1447 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 10 00:35:18.906886 sshd[1441]: pam_unix(sshd:session): session closed for user core Jul 10 00:35:18.912581 kernel: audit: type=1130 audit(1752107718.902:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:18.912756 kernel: audit: type=1106 audit(1752107718.903:155): pid=1447 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jul 10 00:35:18.912792 kernel: audit: type=1104 audit(1752107718.904:156): pid=1447 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 10 00:35:18.912810 kernel: audit: type=1106 audit(1752107718.909:157): pid=1441 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:35:18.904000 audit[1447]: CRED_DISP pid=1447 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 10 00:35:18.909000 audit[1441]: USER_END pid=1441 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:35:18.912080 systemd[1]: Started sshd@6-10.0.0.85:22-10.0.0.1:35250.service. Jul 10 00:35:18.912872 systemd[1]: sshd@5-10.0.0.85:22-10.0.0.1:35240.service: Deactivated successfully. Jul 10 00:35:18.913577 systemd[1]: session-6.scope: Deactivated successfully. Jul 10 00:35:18.915084 systemd-logind[1302]: Session 6 logged out. Waiting for processes to exit. Jul 10 00:35:18.918848 kernel: audit: type=1104 audit(1752107718.909:158): pid=1441 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:35:18.909000 audit[1441]: CRED_DISP pid=1441 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:35:18.919222 systemd-logind[1302]: Removed session 6. Jul 10 00:35:18.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.85:22-10.0.0.1:35250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:18.928945 kernel: audit: type=1130 audit(1752107718.911:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.85:22-10.0.0.1:35250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:18.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.85:22-10.0.0.1:35240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:35:18.961000 audit[1474]: USER_ACCT pid=1474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:35:18.963574 sshd[1474]: Accepted publickey for core from 10.0.0.1 port 35250 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:35:18.965826 sshd[1474]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:35:18.963000 audit[1474]: CRED_ACQ pid=1474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:35:18.963000 audit[1474]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff40a8cd0 a2=3 a3=1 items=0 ppid=1 pid=1474 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:18.963000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:35:18.971560 systemd[1]: Started session-7.scope. Jul 10 00:35:18.971760 systemd-logind[1302]: New session 7 of user core. Jul 10 00:35:18.974000 audit[1474]: USER_START pid=1474 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:35:18.975000 audit[1479]: CRED_ACQ pid=1479 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:35:19.023000 audit[1480]: USER_ACCT pid=1480 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 10 00:35:19.023749 sudo[1480]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 00:35:19.023000 audit[1480]: CRED_REFR pid=1480 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 10 00:35:19.023977 sudo[1480]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 10 00:35:19.028000 audit[1480]: USER_START pid=1480 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 10 00:35:19.088506 systemd[1]: Starting docker.service... 
Jul 10 00:35:19.204603 env[1492]: time="2025-07-10T00:35:19.204413139Z" level=info msg="Starting up" Jul 10 00:35:19.207217 env[1492]: time="2025-07-10T00:35:19.207166847Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 10 00:35:19.207336 env[1492]: time="2025-07-10T00:35:19.207321124Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 10 00:35:19.207444 env[1492]: time="2025-07-10T00:35:19.207408828Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 10 00:35:19.207502 env[1492]: time="2025-07-10T00:35:19.207488787Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 10 00:35:19.210360 env[1492]: time="2025-07-10T00:35:19.210331748Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 10 00:35:19.210360 env[1492]: time="2025-07-10T00:35:19.210358758Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 10 00:35:19.210462 env[1492]: time="2025-07-10T00:35:19.210375957Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 10 00:35:19.210462 env[1492]: time="2025-07-10T00:35:19.210388708Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 10 00:35:19.215800 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2358175495-merged.mount: Deactivated successfully. Jul 10 00:35:19.406505 env[1492]: time="2025-07-10T00:35:19.406465215Z" level=warning msg="Your kernel does not support cgroup blkio weight" Jul 10 00:35:19.406505 env[1492]: time="2025-07-10T00:35:19.406492662Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Jul 10 00:35:19.406697 env[1492]: time="2025-07-10T00:35:19.406622352Z" level=info msg="Loading containers: start." 
Jul 10 00:35:19.462000 audit[1526]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1526 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:19.462000 audit[1526]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffc90b6cc0 a2=0 a3=1 items=0 ppid=1492 pid=1526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:19.462000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jul 10 00:35:19.464000 audit[1528]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1528 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:19.464000 audit[1528]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffce47a4c0 a2=0 a3=1 items=0 ppid=1492 pid=1528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:19.464000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jul 10 00:35:19.466000 audit[1530]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1530 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:19.466000 audit[1530]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffdba201f0 a2=0 a3=1 items=0 ppid=1492 pid=1530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:19.466000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 10 00:35:19.468000 audit[1532]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1532 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:19.468000 audit[1532]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffdd2f4220 a2=0 a3=1 items=0 ppid=1492 pid=1532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:19.468000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 10 00:35:19.470000 audit[1534]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1534 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:19.470000 audit[1534]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd1cb34e0 a2=0 a3=1 items=0 ppid=1492 pid=1534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:19.470000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jul 10 00:35:19.493000 audit[1539]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1539 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:19.493000 audit[1539]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc5d0b4b0 a2=0 a3=1 items=0 ppid=1492 pid=1539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:19.493000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jul 10 00:35:19.499000 audit[1541]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1541 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:19.499000 audit[1541]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd0b3a540 a2=0 a3=1 items=0 ppid=1492 pid=1541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:19.499000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jul 10 00:35:19.501000 audit[1543]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1543 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:19.501000 audit[1543]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffd752ae20 a2=0 a3=1 items=0 ppid=1492 pid=1543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:19.501000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jul 10 00:35:19.502000 audit[1545]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1545 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:19.502000 audit[1545]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffdf7ae040 a2=0 a3=1 items=0 ppid=1492 pid=1545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:19.502000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 10 00:35:19.508000 audit[1549]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1549 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:19.508000 audit[1549]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffd96fd480 a2=0 a3=1 items=0 ppid=1492 pid=1549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:19.508000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 10 00:35:19.516000 audit[1550]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1550 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:19.516000 audit[1550]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffffbf72e10 a2=0 a3=1 items=0 ppid=1492 pid=1550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:19.516000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 10 00:35:19.527459 kernel: Initializing XFRM netlink socket Jul 10 00:35:19.550266 env[1492]: time="2025-07-10T00:35:19.550190607Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 10 00:35:19.565000 audit[1558]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1558 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:19.565000 audit[1558]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=ffffd3aaadd0 a2=0 a3=1 items=0 ppid=1492 pid=1558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:19.565000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jul 10 00:35:19.580000 audit[1561]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1561 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:19.580000 audit[1561]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffc77c6360 a2=0 a3=1 items=0 ppid=1492 pid=1561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:19.580000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jul 10 00:35:19.583000 audit[1564]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1564 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:19.583000 audit[1564]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffee6fc510 a2=0 a3=1 items=0 ppid=1492 pid=1564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:19.583000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jul 10 00:35:19.585000 audit[1566]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1566 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:19.585000 audit[1566]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffd71b6780 a2=0 a3=1 items=0 ppid=1492 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:19.585000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jul 10 00:35:19.587000 audit[1568]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1568 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:19.587000 audit[1568]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=fffff2dc6ec0 a2=0 a3=1 items=0 ppid=1492 pid=1568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:19.587000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jul 10 00:35:19.589000 audit[1570]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1570 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:19.589000 audit[1570]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=fffff7dd5910 a2=0 a3=1 items=0 ppid=1492 pid=1570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:19.589000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jul 10 00:35:19.591000 audit[1572]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1572 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:19.591000 audit[1572]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=fffffa40bf30 a2=0 a3=1 items=0 ppid=1492 pid=1572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:19.591000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jul 10 00:35:19.597000 audit[1575]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1575 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:19.597000 audit[1575]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=ffffef223370 a2=0 a3=1 items=0 ppid=1492 pid=1575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:19.597000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jul 10 00:35:19.599000 audit[1577]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1577 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:19.599000 audit[1577]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=fffffc9cc9e0 a2=0 a3=1 items=0 ppid=1492 pid=1577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:19.599000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 10 00:35:19.601000 audit[1579]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1579 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:19.601000 audit[1579]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffe8000bd0 a2=0 a3=1 items=0 ppid=1492 pid=1579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:19.601000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 10 00:35:19.603000 audit[1581]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1581 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:19.603000 audit[1581]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffffbc621a0 a2=0 a3=1 items=0 ppid=1492 pid=1581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:19.603000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jul 10 00:35:19.604655 systemd-networkd[1098]: docker0: Link UP Jul 10 00:35:19.611000 audit[1585]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1585 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:19.611000 audit[1585]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff0836080 a2=0 a3=1 items=0 ppid=1492 pid=1585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:19.611000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 10 00:35:19.623000 audit[1586]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1586 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:19.623000 audit[1586]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd07d7320 a2=0 a3=1 items=0 ppid=1492 pid=1586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:19.623000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 10 00:35:19.623890 env[1492]: time="2025-07-10T00:35:19.623849135Z" level=info msg="Loading containers: done." Jul 10 00:35:19.646832 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4016287662-merged.mount: Deactivated successfully. 
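The audit PROCTITLE fields in the records above are hex-encoded, NUL-separated argv strings, so the exact iptables invocations Docker issued can be recovered straight from the log. A minimal Python sketch of that decoding; the example hex is the DOCKER-USER record immediately above:

# Decode an audit PROCTITLE value (hex-encoded argv, NUL-separated)
# back into the command line that was executed.
def decode_proctitle(hex_proctitle: str) -> str:
    raw = bytes.fromhex(hex_proctitle)
    return " ".join(arg.decode() for arg in raw.split(b"\x00") if arg)

# The last PROCTITLE above decodes to:
#   /usr/sbin/iptables --wait -I FORWARD -j DOCKER-USER
print(decode_proctitle(
    "2F7573722F7362696E2F69707461626C6573002D2D77616974"
    "002D4900464F5257415244002D6A00444F434B45522D55534552"))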
Jul 10 00:35:19.653323 env[1492]: time="2025-07-10T00:35:19.653275233Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 10 00:35:19.653493 env[1492]: time="2025-07-10T00:35:19.653466887Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 10 00:35:19.653594 env[1492]: time="2025-07-10T00:35:19.653573777Z" level=info msg="Daemon has completed initialization" Jul 10 00:35:19.667617 systemd[1]: Started docker.service. Jul 10 00:35:19.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:19.673339 env[1492]: time="2025-07-10T00:35:19.673232684Z" level=info msg="API listen on /run/docker.sock" Jul 10 00:35:20.285483 env[1315]: time="2025-07-10T00:35:20.285438384Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 10 00:35:20.892262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1783093493.mount: Deactivated successfully. Jul 10 00:35:22.170880 env[1315]: time="2025-07-10T00:35:22.170824551Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:22.175034 env[1315]: time="2025-07-10T00:35:22.174995619Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:22.177051 env[1315]: time="2025-07-10T00:35:22.177019125Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:22.181202 env[1315]: time="2025-07-10T00:35:22.181172197Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:22.182010 env[1315]: time="2025-07-10T00:35:22.181980485Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 10 00:35:22.189917 env[1315]: time="2025-07-10T00:35:22.189888339Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 10 00:35:23.567638 env[1315]: time="2025-07-10T00:35:23.567554640Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:23.569331 env[1315]: time="2025-07-10T00:35:23.569305337Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:23.571861 env[1315]: time="2025-07-10T00:35:23.571825755Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:23.573377 env[1315]: 
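The env[…] lines emitted by dockerd/containerd are logfmt-style key=value records, so fields such as level and msg can be extracted mechanically. A small sketch, using the "API listen" line above as the example input:

import re

# Parse a logfmt-style line: key=value pairs, values optionally double-quoted.
LOGFMT = re.compile(r'(\w+)=(?:"((?:[^"\\]|\\.)*)"|(\S+))')

line = ('time="2025-07-10T00:35:19.673232684Z" level=info '
        'msg="API listen on /run/docker.sock"')

fields = {m.group(1): m.group(2) if m.group(2) is not None else m.group(3)
          for m in LOGFMT.finditer(line)}
print(fields["level"], "-", fields["msg"])  # info - API listen on /run/docker.sock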
time="2025-07-10T00:35:23.573354482Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:23.574109 env[1315]: time="2025-07-10T00:35:23.574083132Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 10 00:35:23.574589 env[1315]: time="2025-07-10T00:35:23.574564756Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 10 00:35:24.758071 env[1315]: time="2025-07-10T00:35:24.757992968Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:24.762015 env[1315]: time="2025-07-10T00:35:24.761962762Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:24.763596 env[1315]: time="2025-07-10T00:35:24.763549428Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:24.766177 env[1315]: time="2025-07-10T00:35:24.766139494Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:24.766931 env[1315]: time="2025-07-10T00:35:24.766901639Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 10 00:35:24.767497 env[1315]: time="2025-07-10T00:35:24.767472191Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 10 00:35:25.740250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount703770748.mount: Deactivated successfully. Jul 10 00:35:25.741229 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 10 00:35:25.741364 systemd[1]: Stopped kubelet.service. Jul 10 00:35:25.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:25.742900 systemd[1]: Starting kubelet.service... Jul 10 00:35:25.746871 kernel: kauditd_printk_skb: 84 callbacks suppressed Jul 10 00:35:25.746953 kernel: audit: type=1130 audit(1752107725.740:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:25.746980 kernel: audit: type=1131 audit(1752107725.740:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:25.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:35:25.847336 systemd[1]: Started kubelet.service. Jul 10 00:35:25.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:25.851464 kernel: audit: type=1130 audit(1752107725.846:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:25.888223 kubelet[1633]: E0710 00:35:25.888173 1633 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:35:25.890385 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:35:25.890553 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:35:25.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 10 00:35:25.894459 kernel: audit: type=1131 audit(1752107725.889:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 10 00:35:26.401321 env[1315]: time="2025-07-10T00:35:26.401272581Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:26.403022 env[1315]: time="2025-07-10T00:35:26.402986915Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:26.404425 env[1315]: time="2025-07-10T00:35:26.404384159Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:26.405471 env[1315]: time="2025-07-10T00:35:26.405421192Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:26.405821 env[1315]: time="2025-07-10T00:35:26.405796921Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 10 00:35:26.406760 env[1315]: time="2025-07-10T00:35:26.406728085Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 10 00:35:26.989947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3411615974.mount: Deactivated successfully. 
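The kubelet exit above (config.yaml missing, status=1/FAILURE, restart counter at 1) is the usual state of a node whose kubelet.service is enabled before kubeadm init or join has written /var/lib/kubelet/config.yaml; the unit keeps restarting until that file appears. A quick Python pre-flight check mirroring the failing open():

from pathlib import Path

# /var/lib/kubelet/config.yaml is normally written by kubeadm init/join;
# until then the kubelet exits exactly as in the log entry above.
config = Path("/var/lib/kubelet/config.yaml")
if config.is_file():
    print(f"{config} present ({config.stat().st_size} bytes); kubelet can load it")
else:
    print(f"{config} missing; expect kubelet.service to keep restarting "
          "until kubeadm init/join creates it")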
Jul 10 00:35:27.819538 env[1315]: time="2025-07-10T00:35:27.819488199Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:27.821308 env[1315]: time="2025-07-10T00:35:27.821273193Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:27.823368 env[1315]: time="2025-07-10T00:35:27.823338075Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:27.825882 env[1315]: time="2025-07-10T00:35:27.825854433Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:27.826682 env[1315]: time="2025-07-10T00:35:27.826654073Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 10 00:35:27.827160 env[1315]: time="2025-07-10T00:35:27.827135556Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 10 00:35:28.311572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount496177673.mount: Deactivated successfully. Jul 10 00:35:28.316201 env[1315]: time="2025-07-10T00:35:28.316153527Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:28.317524 env[1315]: time="2025-07-10T00:35:28.317488004Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:28.318693 env[1315]: time="2025-07-10T00:35:28.318659663Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:28.320182 env[1315]: time="2025-07-10T00:35:28.320142588Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:28.320829 env[1315]: time="2025-07-10T00:35:28.320800925Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 10 00:35:28.321238 env[1315]: time="2025-07-10T00:35:28.321210106Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 10 00:35:28.885346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount77714773.mount: Deactivated successfully. 
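Each pull above ends with a 'returns image reference "sha256:…"' line, which ties a tag to the image ID the kubelet uses for the control-plane pods later in this log. A sketch that collects those pairs from a saved copy of this journal (the boot.log filename is hypothetical):

import re

# Match: PullImage \"<tag>\" returns image reference \"sha256:<id>\"
PULL = re.compile(
    r'PullImage \\?"(?P<image>[^"\\]+)\\?" returns image reference '
    r'\\?"(?P<ref>sha256:[0-9a-f]+)\\?"')

refs = {}
with open("boot.log", encoding="utf-8") as journal:  # hypothetical path
    for line in journal:
        if (m := PULL.search(line)):
            refs[m.group("image")] = m.group("ref")

for image, ref in sorted(refs.items()):
    print(f"{image} -> {ref}")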
Jul 10 00:35:31.048697 env[1315]: time="2025-07-10T00:35:31.048642277Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:31.050117 env[1315]: time="2025-07-10T00:35:31.050094754Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:31.052499 env[1315]: time="2025-07-10T00:35:31.052469495Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:31.055087 env[1315]: time="2025-07-10T00:35:31.055050706Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:31.055977 env[1315]: time="2025-07-10T00:35:31.055945368Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 10 00:35:34.915352 systemd[1]: Stopped kubelet.service. Jul 10 00:35:34.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:34.917510 systemd[1]: Starting kubelet.service... Jul 10 00:35:34.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:34.920366 kernel: audit: type=1130 audit(1752107734.915:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:34.920467 kernel: audit: type=1131 audit(1752107734.915:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:34.943336 systemd[1]: Reloading. Jul 10 00:35:35.010016 /usr/lib/systemd/system-generators/torcx-generator[1690]: time="2025-07-10T00:35:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 10 00:35:35.010045 /usr/lib/systemd/system-generators/torcx-generator[1690]: time="2025-07-10T00:35:35Z" level=info msg="torcx already run" Jul 10 00:35:35.095097 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 10 00:35:35.095118 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 10 00:35:35.112734 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 10 00:35:35.183142 systemd[1]: Started kubelet.service. Jul 10 00:35:35.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:35.187903 systemd[1]: Stopping kubelet.service... Jul 10 00:35:35.189057 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 00:35:35.189292 systemd[1]: Stopped kubelet.service. Jul 10 00:35:35.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:35.190931 systemd[1]: Starting kubelet.service... Jul 10 00:35:35.191823 kernel: audit: type=1130 audit(1752107735.182:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:35.191894 kernel: audit: type=1131 audit(1752107735.187:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:35.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:35.284529 systemd[1]: Started kubelet.service. Jul 10 00:35:35.290612 kernel: audit: type=1130 audit(1752107735.283:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:35.323277 kubelet[1749]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:35:35.323277 kubelet[1749]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 10 00:35:35.323277 kubelet[1749]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
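The three deprecation warnings above say these flags should move into the file passed via --config. A hedged sketch of folding them into the existing KubeletConfiguration: the field names containerRuntimeEndpoint and volumePluginDir, and the containerd socket path, are assumptions about the schema and this host rather than values shown in the log (the flexvolume directory is the one the kubelet probes later in this log):

from pathlib import Path

# Assumption: KubeletConfiguration accepts containerRuntimeEndpoint and
# volumePluginDir as fields; the containerd socket path is the common
# default, not something this log confirms. Verify before applying.
config_path = Path("/var/lib/kubelet/config.yaml")
extra = (
    'containerRuntimeEndpoint: "unix:///run/containerd/containerd.sock"\n'
    'volumePluginDir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"\n'
)
# Naive append; assumes the keys are not already present in the file.
config_path.write_text(config_path.read_text() + extra)
print(f"appended runtime endpoint and volume plugin dir to {config_path}")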
Jul 10 00:35:35.323670 kubelet[1749]: I0710 00:35:35.323357 1749 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:35:36.214110 kubelet[1749]: I0710 00:35:36.214055 1749 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 10 00:35:36.214110 kubelet[1749]: I0710 00:35:36.214096 1749 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:35:36.214396 kubelet[1749]: I0710 00:35:36.214364 1749 server.go:934] "Client rotation is on, will bootstrap in background" Jul 10 00:35:36.287308 kubelet[1749]: I0710 00:35:36.287263 1749 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:35:36.287507 kubelet[1749]: E0710 00:35:36.287470 1749 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.85:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:35:36.295299 kubelet[1749]: E0710 00:35:36.295245 1749 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 10 00:35:36.295299 kubelet[1749]: I0710 00:35:36.295279 1749 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 10 00:35:36.299560 kubelet[1749]: I0710 00:35:36.299528 1749 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 00:35:36.300835 kubelet[1749]: I0710 00:35:36.300801 1749 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 10 00:35:36.301147 kubelet[1749]: I0710 00:35:36.301079 1749 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:35:36.301335 kubelet[1749]: I0710 00:35:36.301140 1749 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 10 00:35:36.301417 kubelet[1749]: I0710 00:35:36.301338 1749 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 00:35:36.301417 kubelet[1749]: I0710 00:35:36.301349 1749 container_manager_linux.go:300] "Creating device plugin manager" Jul 10 00:35:36.301656 kubelet[1749]: I0710 00:35:36.301630 1749 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:35:36.307893 kubelet[1749]: I0710 00:35:36.307864 1749 kubelet.go:408] "Attempting to sync node with API server" Jul 10 00:35:36.307975 kubelet[1749]: I0710 00:35:36.307905 1749 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:35:36.307975 kubelet[1749]: I0710 00:35:36.307931 1749 kubelet.go:314] "Adding apiserver pod source" Jul 10 00:35:36.307975 kubelet[1749]: I0710 00:35:36.307945 1749 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:35:36.311380 kubelet[1749]: W0710 00:35:36.311319 1749 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Jul 10 00:35:36.311472 kubelet[1749]: E0710 00:35:36.311385 1749 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:35:36.311472 kubelet[1749]: W0710 00:35:36.311319 1749 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Jul 10 00:35:36.311472 kubelet[1749]: E0710 00:35:36.311410 1749 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:35:36.322292 kubelet[1749]: I0710 00:35:36.322250 1749 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 10 00:35:36.323099 kubelet[1749]: I0710 00:35:36.323062 1749 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 00:35:36.323245 kubelet[1749]: W0710 00:35:36.323232 1749 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 10 00:35:36.324246 kubelet[1749]: I0710 00:35:36.324199 1749 server.go:1274] "Started kubelet" Jul 10 00:35:36.324583 kubelet[1749]: I0710 00:35:36.324506 1749 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:35:36.325486 kubelet[1749]: I0710 00:35:36.325420 1749 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:35:36.325801 kubelet[1749]: I0710 00:35:36.325783 1749 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:35:36.324000 audit[1749]: AVC avc: denied { mac_admin } for pid=1749 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:35:36.326387 kubelet[1749]: I0710 00:35:36.326360 1749 server.go:449] "Adding debug handlers to kubelet server" Jul 10 00:35:36.328148 kubelet[1749]: I0710 00:35:36.328117 1749 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Jul 10 00:35:36.328215 kubelet[1749]: I0710 00:35:36.328166 1749 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Jul 10 00:35:36.328251 kubelet[1749]: I0710 00:35:36.328242 1749 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:35:36.324000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 10 00:35:36.328935 kubelet[1749]: I0710 00:35:36.328836 1749 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 10 00:35:36.328935 kubelet[1749]: I0710 00:35:36.328912 1749 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 10 00:35:36.329032 kubelet[1749]: I0710 00:35:36.328959 1749 reconciler.go:26] "Reconciler: start to sync state" Jul 10 
00:35:36.329212 kubelet[1749]: I0710 00:35:36.329190 1749 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:35:36.329984 kernel: audit: type=1400 audit(1752107736.324:203): avc: denied { mac_admin } for pid=1749 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:35:36.330089 kernel: audit: type=1401 audit(1752107736.324:203): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 10 00:35:36.330128 kernel: audit: type=1300 audit(1752107736.324:203): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000b564b0 a1=40009fd3e0 a2=4000b56480 a3=25 items=0 ppid=1 pid=1749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:36.324000 audit[1749]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000b564b0 a1=40009fd3e0 a2=4000b56480 a3=25 items=0 ppid=1 pid=1749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:36.330227 kubelet[1749]: W0710 00:35:36.329361 1749 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Jul 10 00:35:36.330227 kubelet[1749]: E0710 00:35:36.329406 1749 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:35:36.332367 kubelet[1749]: E0710 00:35:36.332343 1749 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:35:36.333076 kubelet[1749]: E0710 00:35:36.333040 1749 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:35:36.333238 kubelet[1749]: E0710 00:35:36.333212 1749 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="200ms" Jul 10 00:35:36.333611 kernel: audit: type=1327 audit(1752107736.324:203): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 10 00:35:36.324000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 10 00:35:36.334543 kubelet[1749]: I0710 00:35:36.334526 1749 factory.go:221] Registration of the containerd container factory successfully Jul 10 00:35:36.334641 kubelet[1749]: I0710 00:35:36.334630 1749 factory.go:221] Registration of the systemd container factory successfully Jul 10 00:35:36.334789 kubelet[1749]: I0710 00:35:36.334766 1749 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:35:36.334868 kubelet[1749]: E0710 00:35:36.331987 1749 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.85:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.85:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1850bcb3ffd0e2d4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 00:35:36.324170452 +0000 UTC m=+1.036400479,LastTimestamp:2025-07-10 00:35:36.324170452 +0000 UTC m=+1.036400479,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 10 00:35:36.326000 audit[1749]: AVC avc: denied { mac_admin } for pid=1749 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:35:36.338794 kernel: audit: type=1400 audit(1752107736.326:204): avc: denied { mac_admin } for pid=1749 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:35:36.326000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 10 00:35:36.326000 audit[1749]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000a098e0 a1=40009fd3f8 a2=4000b56540 a3=25 items=0 ppid=1 pid=1749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:36.326000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 10 00:35:36.331000 audit[1762]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1762 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:36.331000 audit[1762]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff15b3780 a2=0 a3=1 items=0 ppid=1749 pid=1762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:36.331000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jul 10 00:35:36.334000 audit[1763]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1763 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:36.334000 audit[1763]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc3515f00 a2=0 a3=1 items=0 ppid=1749 pid=1763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:36.334000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 10 00:35:36.340000 audit[1767]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1767 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:36.340000 audit[1767]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc644f0d0 a2=0 a3=1 items=0 ppid=1749 pid=1767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:36.340000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 10 00:35:36.342000 audit[1769]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1769 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:36.342000 audit[1769]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffff7c09040 a2=0 a3=1 items=0 ppid=1749 pid=1769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:36.342000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 10 00:35:36.349000 audit[1772]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1772 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:36.349000 audit[1772]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffcaaa2090 a2=0 a3=1 items=0 ppid=1749 pid=1772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jul 10 00:35:36.349000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jul 10 00:35:36.351614 kubelet[1749]: I0710 00:35:36.351580 1749 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 00:35:36.350000 audit[1775]: NETFILTER_CFG table=mangle:31 family=2 entries=1 op=nft_register_chain pid=1775 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:36.350000 audit[1775]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff80c1650 a2=0 a3=1 items=0 ppid=1749 pid=1775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:36.350000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jul 10 00:35:36.351000 audit[1776]: NETFILTER_CFG table=mangle:32 family=10 entries=2 op=nft_register_chain pid=1776 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:35:36.351000 audit[1776]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff0e75860 a2=0 a3=1 items=0 ppid=1749 pid=1776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:36.351000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jul 10 00:35:36.353404 kubelet[1749]: I0710 00:35:36.353380 1749 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 10 00:35:36.353504 kubelet[1749]: I0710 00:35:36.353492 1749 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 10 00:35:36.353583 kubelet[1749]: I0710 00:35:36.353573 1749 kubelet.go:2321] "Starting kubelet main sync loop" Jul 10 00:35:36.353685 kubelet[1749]: E0710 00:35:36.353667 1749 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:35:36.354472 kubelet[1749]: W0710 00:35:36.354424 1749 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Jul 10 00:35:36.354540 kubelet[1749]: E0710 00:35:36.354486 1749 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:35:36.353000 audit[1779]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=1779 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:35:36.353000 audit[1779]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff2942230 a2=0 a3=1 items=0 ppid=1749 pid=1779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:36.353000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jul 10 00:35:36.354957 kubelet[1749]: I0710 00:35:36.354854 1749 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 10 00:35:36.354957 kubelet[1749]: I0710 00:35:36.354867 1749 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 10 00:35:36.354957 kubelet[1749]: I0710 00:35:36.354885 1749 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:35:36.353000 audit[1777]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1777 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:36.353000 audit[1777]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcfe8e2f0 a2=0 a3=1 items=0 ppid=1749 pid=1777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:36.353000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jul 10 00:35:36.354000 audit[1780]: NETFILTER_CFG table=filter:35 family=2 entries=1 op=nft_register_chain pid=1780 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:36.354000 audit[1780]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd269c700 a2=0 a3=1 items=0 ppid=1749 pid=1780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:36.354000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jul 10 00:35:36.355000 audit[1781]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1781 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:35:36.355000 audit[1781]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffe6a220c0 a2=0 a3=1 items=0 ppid=1749 pid=1781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:36.355000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jul 10 00:35:36.356000 audit[1782]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1782 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:35:36.356000 audit[1782]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc189d0e0 a2=0 a3=1 items=0 ppid=1749 pid=1782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:36.356000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jul 10 00:35:36.358534 kubelet[1749]: I0710 00:35:36.358497 1749 policy_none.go:49] "None policy: Start" Jul 10 00:35:36.359625 kubelet[1749]: I0710 00:35:36.359034 1749 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 10 00:35:36.359625 kubelet[1749]: I0710 00:35:36.359065 1749 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:35:36.363066 kubelet[1749]: I0710 00:35:36.363027 1749 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:35:36.361000 audit[1749]: AVC avc: denied { mac_admin } for pid=1749 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:35:36.361000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 10 00:35:36.361000 audit[1749]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000fccbd0 a1=4000f87bf0 a2=4000fccba0 a3=25 items=0 ppid=1 pid=1749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:36.361000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 10 00:35:36.363272 kubelet[1749]: I0710 00:35:36.363109 1749 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Jul 10 00:35:36.363272 kubelet[1749]: I0710 00:35:36.363206 1749 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:35:36.363272 kubelet[1749]: I0710 00:35:36.363217 1749 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:35:36.364603 kubelet[1749]: I0710 00:35:36.364565 1749 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:35:36.368031 kubelet[1749]: E0710 00:35:36.367496 1749 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 10 00:35:36.464198 kubelet[1749]: I0710 00:35:36.464022 1749 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:35:36.464546 kubelet[1749]: E0710 00:35:36.464518 1749 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Jul 10 00:35:36.530836 kubelet[1749]: I0710 00:35:36.530794 1749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe1f8556e95fcde52b4e713faf79649d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fe1f8556e95fcde52b4e713faf79649d\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:35:36.531028 kubelet[1749]: I0710 00:35:36.531011 1749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe1f8556e95fcde52b4e713faf79649d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fe1f8556e95fcde52b4e713faf79649d\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:35:36.531121 kubelet[1749]: I0710 00:35:36.531108 1749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:35:36.531208 kubelet[1749]: I0710 00:35:36.531195 1749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:35:36.531352 kubelet[1749]: I0710 00:35:36.531320 1749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:35:36.531510 kubelet[1749]: I0710 00:35:36.531491 1749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 10 00:35:36.531651 kubelet[1749]: 
I0710 00:35:36.531638 1749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:35:36.531750 kubelet[1749]: I0710 00:35:36.531735 1749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:35:36.531833 kubelet[1749]: I0710 00:35:36.531820 1749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe1f8556e95fcde52b4e713faf79649d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fe1f8556e95fcde52b4e713faf79649d\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:35:36.534772 kubelet[1749]: E0710 00:35:36.534662 1749 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="400ms" Jul 10 00:35:36.666720 kubelet[1749]: I0710 00:35:36.666690 1749 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:35:36.667280 kubelet[1749]: E0710 00:35:36.667255 1749 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Jul 10 00:35:36.760275 kubelet[1749]: E0710 00:35:36.760177 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:36.760831 kubelet[1749]: E0710 00:35:36.760810 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:36.761124 env[1315]: time="2025-07-10T00:35:36.761080358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 10 00:35:36.761464 env[1315]: time="2025-07-10T00:35:36.761419153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 10 00:35:36.763077 kubelet[1749]: E0710 00:35:36.762746 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:36.763264 env[1315]: time="2025-07-10T00:35:36.763220056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fe1f8556e95fcde52b4e713faf79649d,Namespace:kube-system,Attempt:0,}" Jul 10 00:35:36.935561 kubelet[1749]: E0710 00:35:36.935492 1749 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="800ms" Jul 10 00:35:37.068789 kubelet[1749]: I0710 00:35:37.068572 1749 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:35:37.069377 kubelet[1749]: E0710 00:35:37.069211 1749 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Jul 10 00:35:37.165158 kubelet[1749]: W0710 00:35:37.165104 1749 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Jul 10 00:35:37.165158 kubelet[1749]: E0710 00:35:37.165159 1749 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:35:37.305349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount402981543.mount: Deactivated successfully. Jul 10 00:35:37.310577 env[1315]: time="2025-07-10T00:35:37.310530158Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:37.312810 env[1315]: time="2025-07-10T00:35:37.312768266Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:37.314760 env[1315]: time="2025-07-10T00:35:37.314727591Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:37.317787 env[1315]: time="2025-07-10T00:35:37.317757720Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:37.319139 env[1315]: time="2025-07-10T00:35:37.319056621Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:37.320679 env[1315]: time="2025-07-10T00:35:37.320650496Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:37.322200 env[1315]: time="2025-07-10T00:35:37.322135679Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:37.324541 env[1315]: time="2025-07-10T00:35:37.324511701Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:37.327282 env[1315]: time="2025-07-10T00:35:37.327251933Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:37.330653 env[1315]: time="2025-07-10T00:35:37.330617691Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:37.331911 env[1315]: time="2025-07-10T00:35:37.331880774Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:37.332922 env[1315]: time="2025-07-10T00:35:37.332888179Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:37.354094 env[1315]: time="2025-07-10T00:35:37.354028728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:35:37.354228 env[1315]: time="2025-07-10T00:35:37.354070382Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:35:37.354228 env[1315]: time="2025-07-10T00:35:37.354081335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:35:37.354425 env[1315]: time="2025-07-10T00:35:37.354387662Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/757bd61dabbae085f9c3ea1fb2d3094368c4e23bd69213242298a3ef5ab99eb1 pid=1790 runtime=io.containerd.runc.v2 Jul 10 00:35:37.362493 env[1315]: time="2025-07-10T00:35:37.362050150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:35:37.362493 env[1315]: time="2025-07-10T00:35:37.362110192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:35:37.362493 env[1315]: time="2025-07-10T00:35:37.362120666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:35:37.362493 env[1315]: time="2025-07-10T00:35:37.362316142Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a3258569c166dad92f61fddb601c2951975f4b49e0933b0ac759305de60ca7eb pid=1807 runtime=io.containerd.runc.v2 Jul 10 00:35:37.365991 env[1315]: time="2025-07-10T00:35:37.365930063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:35:37.366145 env[1315]: time="2025-07-10T00:35:37.365969878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:35:37.366145 env[1315]: time="2025-07-10T00:35:37.365981351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:35:37.366581 env[1315]: time="2025-07-10T00:35:37.366538800Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f75139956dcb26dec1e3cf3cf0fd52b151db85dc2bdee494fbbca528b0fa276f pid=1825 runtime=io.containerd.runc.v2 Jul 10 00:35:37.422503 env[1315]: time="2025-07-10T00:35:37.422031647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fe1f8556e95fcde52b4e713faf79649d,Namespace:kube-system,Attempt:0,} returns sandbox id \"757bd61dabbae085f9c3ea1fb2d3094368c4e23bd69213242298a3ef5ab99eb1\"" Jul 10 00:35:37.425001 kubelet[1749]: E0710 00:35:37.424975 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:37.427295 env[1315]: time="2025-07-10T00:35:37.427256832Z" level=info msg="CreateContainer within sandbox \"757bd61dabbae085f9c3ea1fb2d3094368c4e23bd69213242298a3ef5ab99eb1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 10 00:35:37.435474 env[1315]: time="2025-07-10T00:35:37.435417006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"f75139956dcb26dec1e3cf3cf0fd52b151db85dc2bdee494fbbca528b0fa276f\"" Jul 10 00:35:37.436154 kubelet[1749]: E0710 00:35:37.436132 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:37.437867 env[1315]: time="2025-07-10T00:35:37.437839399Z" level=info msg="CreateContainer within sandbox \"f75139956dcb26dec1e3cf3cf0fd52b151db85dc2bdee494fbbca528b0fa276f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 10 00:35:37.441090 env[1315]: time="2025-07-10T00:35:37.441052413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3258569c166dad92f61fddb601c2951975f4b49e0933b0ac759305de60ca7eb\"" Jul 10 00:35:37.441554 env[1315]: time="2025-07-10T00:35:37.441104900Z" level=info msg="CreateContainer within sandbox \"757bd61dabbae085f9c3ea1fb2d3094368c4e23bd69213242298a3ef5ab99eb1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6aa18cb9806f2da4d9f8915f52c83e1602db0c0dd35bb057823637d966e83e40\"" Jul 10 00:35:37.442119 kubelet[1749]: E0710 00:35:37.442092 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:37.442345 env[1315]: time="2025-07-10T00:35:37.442320093Z" level=info msg="StartContainer for \"6aa18cb9806f2da4d9f8915f52c83e1602db0c0dd35bb057823637d966e83e40\"" Jul 10 00:35:37.443302 env[1315]: time="2025-07-10T00:35:37.443259541Z" level=info msg="CreateContainer within sandbox \"a3258569c166dad92f61fddb601c2951975f4b49e0933b0ac759305de60ca7eb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 10 00:35:37.458658 env[1315]: time="2025-07-10T00:35:37.458617017Z" level=info msg="CreateContainer within sandbox \"f75139956dcb26dec1e3cf3cf0fd52b151db85dc2bdee494fbbca528b0fa276f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} 
returns container id \"ce8619773f90a622748bb7e6f7f3fd3a459b6babb8ae219be23ee5a3b9d06352\"" Jul 10 00:35:37.461348 env[1315]: time="2025-07-10T00:35:37.461308240Z" level=info msg="CreateContainer within sandbox \"a3258569c166dad92f61fddb601c2951975f4b49e0933b0ac759305de60ca7eb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7f7c611adf3ef7324ea3f1e0b87f6b69f02968aeb4259b48360b1699efd05bd3\"" Jul 10 00:35:37.462946 env[1315]: time="2025-07-10T00:35:37.462918984Z" level=info msg="StartContainer for \"ce8619773f90a622748bb7e6f7f3fd3a459b6babb8ae219be23ee5a3b9d06352\"" Jul 10 00:35:37.464050 env[1315]: time="2025-07-10T00:35:37.464017291Z" level=info msg="StartContainer for \"7f7c611adf3ef7324ea3f1e0b87f6b69f02968aeb4259b48360b1699efd05bd3\"" Jul 10 00:35:37.520319 kubelet[1749]: W0710 00:35:37.520231 1749 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Jul 10 00:35:37.520474 kubelet[1749]: E0710 00:35:37.520322 1749 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:35:37.529360 env[1315]: time="2025-07-10T00:35:37.529310958Z" level=info msg="StartContainer for \"6aa18cb9806f2da4d9f8915f52c83e1602db0c0dd35bb057823637d966e83e40\" returns successfully" Jul 10 00:35:37.543751 env[1315]: time="2025-07-10T00:35:37.543712557Z" level=info msg="StartContainer for \"7f7c611adf3ef7324ea3f1e0b87f6b69f02968aeb4259b48360b1699efd05bd3\" returns successfully" Jul 10 00:35:37.557205 env[1315]: time="2025-07-10T00:35:37.557156919Z" level=info msg="StartContainer for \"ce8619773f90a622748bb7e6f7f3fd3a459b6babb8ae219be23ee5a3b9d06352\" returns successfully" Jul 10 00:35:37.871216 kubelet[1749]: I0710 00:35:37.871184 1749 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:35:38.359218 kubelet[1749]: E0710 00:35:38.359183 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:38.361364 kubelet[1749]: E0710 00:35:38.361333 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:38.363073 kubelet[1749]: E0710 00:35:38.363050 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:39.364576 kubelet[1749]: E0710 00:35:39.364539 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:39.861530 kubelet[1749]: I0710 00:35:39.861487 1749 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 10 00:35:39.861530 kubelet[1749]: E0710 00:35:39.861528 1749 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 10 00:35:40.274058 kubelet[1749]: E0710 
00:35:40.274019 1749 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 10 00:35:40.274384 kubelet[1749]: E0710 00:35:40.274366 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:40.313232 kubelet[1749]: I0710 00:35:40.313201 1749 apiserver.go:52] "Watching apiserver" Jul 10 00:35:40.329508 kubelet[1749]: I0710 00:35:40.329473 1749 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 10 00:35:41.742305 systemd[1]: Reloading. Jul 10 00:35:41.785126 /usr/lib/systemd/system-generators/torcx-generator[2041]: time="2025-07-10T00:35:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 10 00:35:41.785157 /usr/lib/systemd/system-generators/torcx-generator[2041]: time="2025-07-10T00:35:41Z" level=info msg="torcx already run" Jul 10 00:35:41.855342 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 10 00:35:41.855518 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 10 00:35:41.872635 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:35:41.957925 systemd[1]: Stopping kubelet.service... Jul 10 00:35:41.989807 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 00:35:41.990104 systemd[1]: Stopped kubelet.service. Jul 10 00:35:41.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:41.990817 kernel: kauditd_printk_skb: 43 callbacks suppressed Jul 10 00:35:41.990857 kernel: audit: type=1131 audit(1752107741.989:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:41.992241 systemd[1]: Starting kubelet.service... Jul 10 00:35:42.098261 systemd[1]: Started kubelet.service. Jul 10 00:35:42.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:42.107104 kernel: audit: type=1130 audit(1752107742.098:219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:42.142272 kubelet[2094]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
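The kubelet deprecation notices here (this one and the two that follow) all point at the same migration: the values belong in the file handed to --config rather than on the command line. Purely as an illustrative sketch — the endpoint and plugin directory below are assumed placeholders, not values read from this host — the equivalent config-file entries would look something like:

    {
      "apiVersion": "kubelet.config.k8s.io/v1beta1",
      "kind": "KubeletConfiguration",
      "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
      "volumePluginDir": "/var/lib/kubelet/volumeplugins"
    }

(--pod-infra-container-image is the exception: per the notice that follows it is being removed outright, with the image garbage collector taking the sandbox image from CRI instead.)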
Jul 10 00:35:42.142725 kubelet[2094]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 10 00:35:42.142869 kubelet[2094]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:35:42.143011 kubelet[2094]: I0710 00:35:42.142983 2094 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:35:42.150642 kubelet[2094]: I0710 00:35:42.150602 2094 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 10 00:35:42.150642 kubelet[2094]: I0710 00:35:42.150631 2094 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:35:42.150926 kubelet[2094]: I0710 00:35:42.150886 2094 server.go:934] "Client rotation is on, will bootstrap in background" Jul 10 00:35:42.152540 kubelet[2094]: I0710 00:35:42.152520 2094 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 10 00:35:42.155782 kubelet[2094]: I0710 00:35:42.155745 2094 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:35:42.159554 kubelet[2094]: E0710 00:35:42.159519 2094 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 10 00:35:42.159554 kubelet[2094]: I0710 00:35:42.159556 2094 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 10 00:35:42.161907 kubelet[2094]: I0710 00:35:42.161883 2094 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 00:35:42.162244 kubelet[2094]: I0710 00:35:42.162217 2094 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 10 00:35:42.162353 kubelet[2094]: I0710 00:35:42.162325 2094 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:35:42.162509 kubelet[2094]: I0710 00:35:42.162348 2094 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 10 00:35:42.162581 kubelet[2094]: I0710 00:35:42.162517 2094 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 00:35:42.162581 kubelet[2094]: I0710 00:35:42.162528 2094 container_manager_linux.go:300] "Creating device plugin manager" Jul 10 00:35:42.162581 kubelet[2094]: I0710 00:35:42.162558 2094 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:35:42.162656 kubelet[2094]: I0710 00:35:42.162641 2094 kubelet.go:408] "Attempting to sync node with API server" Jul 10 00:35:42.162656 kubelet[2094]: I0710 00:35:42.162654 2094 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:35:42.162699 kubelet[2094]: I0710 00:35:42.162672 2094 kubelet.go:314] "Adding apiserver pod source" Jul 10 00:35:42.162699 kubelet[2094]: I0710 00:35:42.162688 2094 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:35:42.163721 kubelet[2094]: I0710 00:35:42.163703 2094 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 10 00:35:42.164150 kubelet[2094]: I0710 00:35:42.164124 2094 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 00:35:42.164497 kubelet[2094]: I0710 00:35:42.164484 2094 server.go:1274] "Started kubelet" Jul 10 00:35:42.165577 kubelet[2094]: I0710 00:35:42.165515 2094 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 
00:35:42.165807 kubelet[2094]: I0710 00:35:42.165779 2094 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:35:42.165854 kubelet[2094]: I0710 00:35:42.165838 2094 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:35:42.164000 audit[2094]: AVC avc: denied { mac_admin } for pid=2094 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:35:42.166247 kubelet[2094]: I0710 00:35:42.166178 2094 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Jul 10 00:35:42.166247 kubelet[2094]: I0710 00:35:42.166214 2094 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Jul 10 00:35:42.166247 kubelet[2094]: I0710 00:35:42.166236 2094 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:35:42.164000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 10 00:35:42.174288 kernel: audit: type=1400 audit(1752107742.164:220): avc: denied { mac_admin } for pid=2094 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:35:42.174325 kernel: audit: type=1401 audit(1752107742.164:220): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 10 00:35:42.174341 kernel: audit: type=1300 audit(1752107742.164:220): arch=c00000b7 syscall=5 success=no exit=-22 a0=40009c6ba0 a1=4000514ea0 a2=40009c6b70 a3=25 items=0 ppid=1 pid=2094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:42.164000 audit[2094]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40009c6ba0 a1=4000514ea0 a2=40009c6b70 a3=25 items=0 ppid=1 pid=2094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:42.164000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 10 00:35:42.181083 kernel: audit: type=1327 audit(1752107742.164:220): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 10 00:35:42.164000 audit[2094]: AVC avc: denied { mac_admin } for pid=2094 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:35:42.183304 kubelet[2094]: E0710 00:35:42.183286 2094 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:35:42.183400 kubelet[2094]: I0710 00:35:42.183362 2094 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:35:42.184171 kernel: audit: type=1400 audit(1752107742.164:221): avc: denied { mac_admin } for pid=2094 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:35:42.164000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 10 00:35:42.186498 kubelet[2094]: I0710 00:35:42.183509 2094 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 10 00:35:42.186635 kubelet[2094]: I0710 00:35:42.183522 2094 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 10 00:35:42.186799 kubelet[2094]: I0710 00:35:42.186788 2094 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:35:42.186856 kubelet[2094]: I0710 00:35:42.185536 2094 server.go:449] "Adding debug handlers to kubelet server" Jul 10 00:35:42.186952 kubelet[2094]: I0710 00:35:42.184661 2094 factory.go:221] Registration of the systemd container factory successfully Jul 10 00:35:42.187048 kubelet[2094]: I0710 00:35:42.187012 2094 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:35:42.187237 kernel: audit: type=1401 audit(1752107742.164:221): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 10 00:35:42.164000 audit[2094]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000b8bd20 a1=4000514eb8 a2=40009c6c30 a3=25 items=0 ppid=1 pid=2094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:42.191104 kernel: audit: type=1300 audit(1752107742.164:221): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000b8bd20 a1=4000514eb8 a2=40009c6c30 a3=25 items=0 ppid=1 pid=2094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:42.164000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 10 00:35:42.194375 kernel: audit: type=1327 audit(1752107742.164:221): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 10 00:35:42.196492 kubelet[2094]: I0710 00:35:42.196466 2094 factory.go:221] Registration of the containerd container factory successfully Jul 10 00:35:42.200192 kubelet[2094]: I0710 00:35:42.200156 2094 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 00:35:42.201757 kubelet[2094]: I0710 00:35:42.201726 2094 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 10 00:35:42.201824 kubelet[2094]: I0710 00:35:42.201800 2094 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 10 00:35:42.201824 kubelet[2094]: I0710 00:35:42.201819 2094 kubelet.go:2321] "Starting kubelet main sync loop" Jul 10 00:35:42.201900 kubelet[2094]: E0710 00:35:42.201873 2094 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:35:42.236229 kubelet[2094]: I0710 00:35:42.236203 2094 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 10 00:35:42.236229 kubelet[2094]: I0710 00:35:42.236224 2094 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 10 00:35:42.236329 kubelet[2094]: I0710 00:35:42.236243 2094 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:35:42.236404 kubelet[2094]: I0710 00:35:42.236386 2094 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 10 00:35:42.236451 kubelet[2094]: I0710 00:35:42.236402 2094 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 10 00:35:42.236451 kubelet[2094]: I0710 00:35:42.236420 2094 policy_none.go:49] "None policy: Start" Jul 10 00:35:42.237056 kubelet[2094]: I0710 00:35:42.237034 2094 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 10 00:35:42.237116 kubelet[2094]: I0710 00:35:42.237064 2094 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:35:42.237221 kubelet[2094]: I0710 00:35:42.237187 2094 state_mem.go:75] "Updated machine memory state" Jul 10 00:35:42.238371 kubelet[2094]: I0710 00:35:42.238350 2094 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:35:42.236000 audit[2094]: AVC avc: denied { mac_admin } for pid=2094 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:35:42.236000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 10 00:35:42.236000 audit[2094]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000f07620 a1=400106c270 a2=4000f075f0 a3=25 items=0 ppid=1 pid=2094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:42.236000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 10 00:35:42.238580 kubelet[2094]: I0710 00:35:42.238408 2094 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Jul 10 00:35:42.238580 kubelet[2094]: I0710 00:35:42.238556 2094 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:35:42.238632 kubelet[2094]: I0710 00:35:42.238566 2094 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:35:42.240111 kubelet[2094]: I0710 00:35:42.239993 2094 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:35:42.350541 kubelet[2094]: I0710 00:35:42.347786 2094 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:35:42.354794 kubelet[2094]: I0710 00:35:42.354771 2094 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 10 00:35:42.354932 kubelet[2094]: I0710 00:35:42.354920 2094 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 10 00:35:42.488614 kubelet[2094]: I0710 00:35:42.488569 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe1f8556e95fcde52b4e713faf79649d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fe1f8556e95fcde52b4e713faf79649d\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:35:42.488614 kubelet[2094]: I0710 00:35:42.488613 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:35:42.488760 kubelet[2094]: I0710 00:35:42.488633 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:35:42.488760 kubelet[2094]: I0710 00:35:42.488649 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe1f8556e95fcde52b4e713faf79649d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fe1f8556e95fcde52b4e713faf79649d\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:35:42.488760 kubelet[2094]: I0710 00:35:42.488667 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe1f8556e95fcde52b4e713faf79649d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fe1f8556e95fcde52b4e713faf79649d\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:35:42.488760 kubelet[2094]: I0710 00:35:42.488684 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:35:42.488760 kubelet[2094]: I0710 00:35:42.488699 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:35:42.488889 kubelet[2094]: I0710 00:35:42.488720 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:35:42.488889 kubelet[2094]: I0710 00:35:42.488763 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 10 00:35:42.609754 kubelet[2094]: E0710 00:35:42.609650 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:42.609987 kubelet[2094]: E0710 00:35:42.609675 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:42.610077 kubelet[2094]: E0710 00:35:42.609693 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:43.162926 kubelet[2094]: I0710 00:35:43.162871 2094 apiserver.go:52] "Watching apiserver" Jul 10 00:35:43.187083 kubelet[2094]: I0710 00:35:43.187047 2094 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 10 00:35:43.209998 kubelet[2094]: E0710 00:35:43.209963 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:43.210405 kubelet[2094]: E0710 00:35:43.210365 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:43.214878 kubelet[2094]: E0710 00:35:43.214839 2094 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 10 00:35:43.215078 kubelet[2094]: E0710 00:35:43.215055 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:43.242467 kubelet[2094]: I0710 00:35:43.242081 2094 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.242063349 podStartE2EDuration="1.242063349s" podCreationTimestamp="2025-07-10 00:35:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:35:43.241831974 +0000 UTC m=+1.138411035" watchObservedRunningTime="2025-07-10 00:35:43.242063349 +0000 UTC m=+1.138642370" Jul 10 00:35:43.242467 kubelet[2094]: I0710 
00:35:43.242187 2094 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.242182395 podStartE2EDuration="1.242182395s" podCreationTimestamp="2025-07-10 00:35:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:35:43.229171356 +0000 UTC m=+1.125750417" watchObservedRunningTime="2025-07-10 00:35:43.242182395 +0000 UTC m=+1.138761456" Jul 10 00:35:43.262456 kubelet[2094]: I0710 00:35:43.262370 2094 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.262351329 podStartE2EDuration="1.262351329s" podCreationTimestamp="2025-07-10 00:35:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:35:43.254117299 +0000 UTC m=+1.150696360" watchObservedRunningTime="2025-07-10 00:35:43.262351329 +0000 UTC m=+1.158930390" Jul 10 00:35:44.211735 kubelet[2094]: E0710 00:35:44.211690 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:45.478269 kubelet[2094]: E0710 00:35:45.478235 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:46.779389 kubelet[2094]: E0710 00:35:46.779355 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:48.188839 kubelet[2094]: I0710 00:35:48.188802 2094 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 00:35:48.189572 env[1315]: time="2025-07-10T00:35:48.189518035Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
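containerd's note just above — "No cni config template is specified, wait for other system components to drop the config." — means pod networking stays unconfigured until some component drops a conflist into /etc/cni/net.d/; on this node that is presumably Calico, installed by the tigera-operator pod that appears below. Purely as an illustrative sketch (plugin choice, file contents, and values are assumptions, not what Calico actually writes), a minimal conflist carrying the pod CIDR reported here would look something like:

    {
      "cniVersion": "0.4.0",
      "name": "pod-network",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "192.168.0.0/24",
            "routes": [ { "dst": "0.0.0.0/0" } ]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }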
Jul 10 00:35:48.189821 kubelet[2094]: I0710 00:35:48.189665 2094 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 00:35:49.038140 kubelet[2094]: I0710 00:35:49.038094 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/397221f2-418c-409c-be01-5db956ec6e7a-lib-modules\") pod \"kube-proxy-g2ccc\" (UID: \"397221f2-418c-409c-be01-5db956ec6e7a\") " pod="kube-system/kube-proxy-g2ccc" Jul 10 00:35:49.038140 kubelet[2094]: I0710 00:35:49.038134 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/397221f2-418c-409c-be01-5db956ec6e7a-kube-proxy\") pod \"kube-proxy-g2ccc\" (UID: \"397221f2-418c-409c-be01-5db956ec6e7a\") " pod="kube-system/kube-proxy-g2ccc" Jul 10 00:35:49.038351 kubelet[2094]: I0710 00:35:49.038152 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/397221f2-418c-409c-be01-5db956ec6e7a-xtables-lock\") pod \"kube-proxy-g2ccc\" (UID: \"397221f2-418c-409c-be01-5db956ec6e7a\") " pod="kube-system/kube-proxy-g2ccc" Jul 10 00:35:49.038351 kubelet[2094]: I0710 00:35:49.038189 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmmb4\" (UniqueName: \"kubernetes.io/projected/397221f2-418c-409c-be01-5db956ec6e7a-kube-api-access-xmmb4\") pod \"kube-proxy-g2ccc\" (UID: \"397221f2-418c-409c-be01-5db956ec6e7a\") " pod="kube-system/kube-proxy-g2ccc" Jul 10 00:35:49.146362 kubelet[2094]: E0710 00:35:49.146312 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:49.147737 kubelet[2094]: I0710 00:35:49.147706 2094 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 10 00:35:49.222759 kubelet[2094]: E0710 00:35:49.222722 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:49.272670 kubelet[2094]: E0710 00:35:49.272637 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:49.273455 env[1315]: time="2025-07-10T00:35:49.273396044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g2ccc,Uid:397221f2-418c-409c-be01-5db956ec6e7a,Namespace:kube-system,Attempt:0,}" Jul 10 00:35:49.288313 env[1315]: time="2025-07-10T00:35:49.288187448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:35:49.288442 env[1315]: time="2025-07-10T00:35:49.288235967Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:35:49.288442 env[1315]: time="2025-07-10T00:35:49.288246926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:35:49.288827 env[1315]: time="2025-07-10T00:35:49.288795030Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd3aaab889977d196e0c49afa8c8f512d702299b211fe96d89a83337322c0bef pid=2154 runtime=io.containerd.runc.v2 Jul 10 00:35:49.340167 kubelet[2094]: I0710 00:35:49.340131 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt4n7\" (UniqueName: \"kubernetes.io/projected/30e30c2a-a62d-4d47-9cff-87c9b77765f3-kube-api-access-bt4n7\") pod \"tigera-operator-5bf8dfcb4-hknqm\" (UID: \"30e30c2a-a62d-4d47-9cff-87c9b77765f3\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-hknqm" Jul 10 00:35:49.340400 kubelet[2094]: I0710 00:35:49.340374 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/30e30c2a-a62d-4d47-9cff-87c9b77765f3-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-hknqm\" (UID: \"30e30c2a-a62d-4d47-9cff-87c9b77765f3\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-hknqm" Jul 10 00:35:49.354301 env[1315]: time="2025-07-10T00:35:49.354262342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g2ccc,Uid:397221f2-418c-409c-be01-5db956ec6e7a,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd3aaab889977d196e0c49afa8c8f512d702299b211fe96d89a83337322c0bef\"" Jul 10 00:35:49.355062 kubelet[2094]: E0710 00:35:49.355036 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:49.358267 env[1315]: time="2025-07-10T00:35:49.358228185Z" level=info msg="CreateContainer within sandbox \"dd3aaab889977d196e0c49afa8c8f512d702299b211fe96d89a83337322c0bef\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 00:35:49.370150 env[1315]: time="2025-07-10T00:35:49.370095715Z" level=info msg="CreateContainer within sandbox \"dd3aaab889977d196e0c49afa8c8f512d702299b211fe96d89a83337322c0bef\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"415cd8df34231a11847bb7410d844861a44b59dfddd59be44f37af5189773c06\"" Jul 10 00:35:49.370765 env[1315]: time="2025-07-10T00:35:49.370733817Z" level=info msg="StartContainer for \"415cd8df34231a11847bb7410d844861a44b59dfddd59be44f37af5189773c06\"" Jul 10 00:35:49.456032 env[1315]: time="2025-07-10T00:35:49.455988425Z" level=info msg="StartContainer for \"415cd8df34231a11847bb7410d844861a44b59dfddd59be44f37af5189773c06\" returns successfully" Jul 10 00:35:49.640466 env[1315]: time="2025-07-10T00:35:49.640351235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-hknqm,Uid:30e30c2a-a62d-4d47-9cff-87c9b77765f3,Namespace:tigera-operator,Attempt:0,}" Jul 10 00:35:49.656006 env[1315]: time="2025-07-10T00:35:49.655927136Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:35:49.656006 env[1315]: time="2025-07-10T00:35:49.655970935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:35:49.656006 env[1315]: time="2025-07-10T00:35:49.655981614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:35:49.656180 env[1315]: time="2025-07-10T00:35:49.656129410Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8e00c6e4504cf67aa8ec76842fd9659a92fe1e116e5570b174c6926e357f3e26 pid=2240 runtime=io.containerd.runc.v2 Jul 10 00:35:49.715000 audit[2297]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2297 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:35:49.717899 kernel: kauditd_printk_skb: 4 callbacks suppressed Jul 10 00:35:49.717966 kernel: audit: type=1325 audit(1752107749.715:223): table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2297 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:35:49.715000 audit[2297]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd744ca90 a2=0 a3=1 items=0 ppid=2205 pid=2297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.724485 kernel: audit: type=1300 audit(1752107749.715:223): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd744ca90 a2=0 a3=1 items=0 ppid=2205 pid=2297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.724547 kernel: audit: type=1327 audit(1752107749.715:223): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 10 00:35:49.715000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 10 00:35:49.725670 env[1315]: time="2025-07-10T00:35:49.725636123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-hknqm,Uid:30e30c2a-a62d-4d47-9cff-87c9b77765f3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8e00c6e4504cf67aa8ec76842fd9659a92fe1e116e5570b174c6926e357f3e26\"" Jul 10 00:35:49.722000 audit[2294]: NETFILTER_CFG table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2294 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:49.727839 kernel: audit: type=1325 audit(1752107749.722:224): table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2294 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:49.722000 audit[2294]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc507d730 a2=0 a3=1 items=0 ppid=2205 pid=2294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.732400 kernel: audit: type=1300 audit(1752107749.722:224): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc507d730 a2=0 a3=1 items=0 ppid=2205 pid=2294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.722000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 10 00:35:49.733659 env[1315]: time="2025-07-10T00:35:49.733625447Z" level=info msg="PullImage 
\"quay.io/tigera/operator:v1.38.3\"" Jul 10 00:35:49.735118 kernel: audit: type=1327 audit(1752107749.722:224): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 10 00:35:49.735512 kernel: audit: type=1325 audit(1752107749.724:225): table=nat:40 family=2 entries=1 op=nft_register_chain pid=2301 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:49.724000 audit[2301]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2301 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:49.724000 audit[2301]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe968fe90 a2=0 a3=1 items=0 ppid=2205 pid=2301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.741245 kernel: audit: type=1300 audit(1752107749.724:225): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe968fe90 a2=0 a3=1 items=0 ppid=2205 pid=2301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.741310 kernel: audit: type=1327 audit(1752107749.724:225): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 10 00:35:49.724000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 10 00:35:49.727000 audit[2302]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2302 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:49.744883 kernel: audit: type=1325 audit(1752107749.727:226): table=filter:41 family=2 entries=1 op=nft_register_chain pid=2302 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:49.727000 audit[2302]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc431d420 a2=0 a3=1 items=0 ppid=2205 pid=2302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.727000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 10 00:35:49.731000 audit[2303]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2303 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:35:49.731000 audit[2303]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffec9b7240 a2=0 a3=1 items=0 ppid=2205 pid=2303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.731000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 10 00:35:49.734000 audit[2304]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2304 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:35:49.734000 audit[2304]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcba24390 a2=0 a3=1 items=0 ppid=2205 pid=2304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.734000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 10 00:35:49.822000 audit[2305]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2305 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:49.822000 audit[2305]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffcdfec5a0 a2=0 a3=1 items=0 ppid=2205 pid=2305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.822000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 10 00:35:49.828000 audit[2307]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2307 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:49.828000 audit[2307]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffffce2bb00 a2=0 a3=1 items=0 ppid=2205 pid=2307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.828000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jul 10 00:35:49.831000 audit[2310]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2310 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:49.831000 audit[2310]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffecaace90 a2=0 a3=1 items=0 ppid=2205 pid=2310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.831000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jul 10 00:35:49.832000 audit[2311]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2311 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:49.832000 audit[2311]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff1676990 a2=0 a3=1 items=0 ppid=2205 pid=2311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.832000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 10 00:35:49.834000 audit[2313]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2313 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:49.834000 audit[2313]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff2d95bf0 a2=0 a3=1 
items=0 ppid=2205 pid=2313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.834000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 10 00:35:49.835000 audit[2314]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2314 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:49.835000 audit[2314]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd3692420 a2=0 a3=1 items=0 ppid=2205 pid=2314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.835000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 10 00:35:49.838000 audit[2316]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2316 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:49.838000 audit[2316]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc71d09d0 a2=0 a3=1 items=0 ppid=2205 pid=2316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.838000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 10 00:35:49.842000 audit[2319]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2319 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:49.842000 audit[2319]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc9888c20 a2=0 a3=1 items=0 ppid=2205 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.842000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jul 10 00:35:49.843000 audit[2320]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2320 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:49.843000 audit[2320]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe6b9e580 a2=0 a3=1 items=0 ppid=2205 pid=2320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.843000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 10 00:35:49.845000 audit[2322]: NETFILTER_CFG table=filter:53 family=2 entries=1 
op=nft_register_rule pid=2322 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:49.845000 audit[2322]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd1ee8980 a2=0 a3=1 items=0 ppid=2205 pid=2322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.845000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 10 00:35:49.846000 audit[2323]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2323 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:49.846000 audit[2323]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc147d3b0 a2=0 a3=1 items=0 ppid=2205 pid=2323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.846000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 10 00:35:49.849000 audit[2325]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2325 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:49.849000 audit[2325]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd90217e0 a2=0 a3=1 items=0 ppid=2205 pid=2325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.849000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 10 00:35:49.852000 audit[2328]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2328 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:49.852000 audit[2328]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc806e4b0 a2=0 a3=1 items=0 ppid=2205 pid=2328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.852000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 10 00:35:49.856000 audit[2331]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2331 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:49.856000 audit[2331]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffff5ad30 a2=0 a3=1 items=0 ppid=2205 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.856000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 10 00:35:49.857000 audit[2332]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2332 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:49.857000 audit[2332]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffc5ba7e0 a2=0 a3=1 items=0 ppid=2205 pid=2332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.857000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 10 00:35:49.859000 audit[2334]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2334 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:49.859000 audit[2334]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff69aa530 a2=0 a3=1 items=0 ppid=2205 pid=2334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.859000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 10 00:35:49.862000 audit[2337]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2337 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:49.862000 audit[2337]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffea95b280 a2=0 a3=1 items=0 ppid=2205 pid=2337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.862000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 10 00:35:49.863000 audit[2338]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2338 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:49.863000 audit[2338]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd1294fb0 a2=0 a3=1 items=0 ppid=2205 pid=2338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.863000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 10 00:35:49.865000 audit[2340]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2340 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 00:35:49.865000 audit[2340]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=fffff705d6c0 a2=0 a3=1 items=0 ppid=2205 pid=2340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.865000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 10 00:35:49.890000 audit[2346]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2346 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:35:49.890000 audit[2346]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcd651a60 a2=0 a3=1 items=0 ppid=2205 pid=2346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.890000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:35:49.904000 audit[2346]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2346 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:35:49.904000 audit[2346]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffcd651a60 a2=0 a3=1 items=0 ppid=2205 pid=2346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.904000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:35:49.906000 audit[2351]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2351 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:35:49.906000 audit[2351]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc7f71960 a2=0 a3=1 items=0 ppid=2205 pid=2351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.906000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 10 00:35:49.908000 audit[2353]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2353 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:35:49.908000 audit[2353]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffe5e58da0 a2=0 a3=1 items=0 ppid=2205 pid=2353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.908000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jul 10 00:35:49.911000 audit[2356]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2356 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:35:49.911000 audit[2356]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffee42f3a0 a2=0 a3=1 items=0 ppid=2205 pid=2356 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.911000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jul 10 00:35:49.912000 audit[2357]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2357 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:35:49.912000 audit[2357]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc9fb31a0 a2=0 a3=1 items=0 ppid=2205 pid=2357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.912000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 10 00:35:49.914000 audit[2359]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2359 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:35:49.914000 audit[2359]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc7e82a00 a2=0 a3=1 items=0 ppid=2205 pid=2359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.914000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 10 00:35:49.915000 audit[2360]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2360 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:35:49.915000 audit[2360]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff7fca160 a2=0 a3=1 items=0 ppid=2205 pid=2360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.915000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 10 00:35:49.917000 audit[2362]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2362 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:35:49.917000 audit[2362]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc08ac230 a2=0 a3=1 items=0 ppid=2205 pid=2362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.917000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jul 10 00:35:49.921000 audit[2365]: NETFILTER_CFG table=filter:72 family=10 entries=2 
op=nft_register_chain pid=2365 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:35:49.921000 audit[2365]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffd7ee7cd0 a2=0 a3=1 items=0 ppid=2205 pid=2365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.921000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 10 00:35:49.922000 audit[2366]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2366 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:35:49.922000 audit[2366]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffff472c20 a2=0 a3=1 items=0 ppid=2205 pid=2366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.922000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 10 00:35:49.924000 audit[2368]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2368 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:35:49.924000 audit[2368]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffdd6fc880 a2=0 a3=1 items=0 ppid=2205 pid=2368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.924000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 10 00:35:49.925000 audit[2369]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2369 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:35:49.925000 audit[2369]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe0cb1cd0 a2=0 a3=1 items=0 ppid=2205 pid=2369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.925000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 10 00:35:49.928000 audit[2371]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2371 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:35:49.928000 audit[2371]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdf606080 a2=0 a3=1 items=0 ppid=2205 pid=2371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.928000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 10 00:35:49.931000 audit[2374]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2374 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:35:49.931000 audit[2374]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd16e49c0 a2=0 a3=1 items=0 ppid=2205 pid=2374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.931000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 10 00:35:49.935000 audit[2377]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2377 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:35:49.935000 audit[2377]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd6323540 a2=0 a3=1 items=0 ppid=2205 pid=2377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.935000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jul 10 00:35:49.936000 audit[2378]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2378 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:35:49.936000 audit[2378]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc95f22c0 a2=0 a3=1 items=0 ppid=2205 pid=2378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.936000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 10 00:35:49.938000 audit[2380]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2380 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:35:49.938000 audit[2380]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=fffff2d95b70 a2=0 a3=1 items=0 ppid=2205 pid=2380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.938000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 10 00:35:49.942000 audit[2383]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2383 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:35:49.942000 audit[2383]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffc31f03c0 a2=0 a3=1 items=0 ppid=2205 pid=2383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.942000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 10 00:35:49.943000 audit[2384]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2384 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:35:49.943000 audit[2384]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffeda40b0 a2=0 a3=1 items=0 ppid=2205 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.943000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 10 00:35:49.945000 audit[2386]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2386 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:35:49.945000 audit[2386]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffc6e9c030 a2=0 a3=1 items=0 ppid=2205 pid=2386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.945000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 10 00:35:49.946000 audit[2387]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2387 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:35:49.946000 audit[2387]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffea673190 a2=0 a3=1 items=0 ppid=2205 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.946000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 10 00:35:49.948000 audit[2389]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2389 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 00:35:49.948000 audit[2389]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffeb093450 a2=0 a3=1 items=0 ppid=2205 pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.948000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 10 00:35:49.952000 audit[2392]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2392 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 
00:35:49.952000 audit[2392]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffdfdada20 a2=0 a3=1 items=0 ppid=2205 pid=2392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.952000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 10 00:35:49.954000 audit[2394]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2394 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 10 00:35:49.954000 audit[2394]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=fffff01badf0 a2=0 a3=1 items=0 ppid=2205 pid=2394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.954000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:35:49.955000 audit[2394]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2394 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 10 00:35:49.955000 audit[2394]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=fffff01badf0 a2=0 a3=1 items=0 ppid=2205 pid=2394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:49.955000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:35:50.226060 kubelet[2094]: E0710 00:35:50.225763 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:50.237725 kubelet[2094]: I0710 00:35:50.237657 2094 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g2ccc" podStartSLOduration=2.237627056 podStartE2EDuration="2.237627056s" podCreationTimestamp="2025-07-10 00:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:35:50.234301308 +0000 UTC m=+8.130880369" watchObservedRunningTime="2025-07-10 00:35:50.237627056 +0000 UTC m=+8.134206117" Jul 10 00:35:51.192974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4045241098.mount: Deactivated successfully. 
Jul 10 00:35:51.737723 env[1315]: time="2025-07-10T00:35:51.737675012Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:51.739329 env[1315]: time="2025-07-10T00:35:51.739291449Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:51.741348 env[1315]: time="2025-07-10T00:35:51.741316916Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:51.742962 env[1315]: time="2025-07-10T00:35:51.742928753Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:51.743573 env[1315]: time="2025-07-10T00:35:51.743539137Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 10 00:35:51.747166 env[1315]: time="2025-07-10T00:35:51.747124843Z" level=info msg="CreateContainer within sandbox \"8e00c6e4504cf67aa8ec76842fd9659a92fe1e116e5570b174c6926e357f3e26\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 10 00:35:51.756945 env[1315]: time="2025-07-10T00:35:51.756880506Z" level=info msg="CreateContainer within sandbox \"8e00c6e4504cf67aa8ec76842fd9659a92fe1e116e5570b174c6926e357f3e26\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a0b22a37df78a1fbff38ac515b3919f63f8fc58418eeb7a88c14ef4ae87c1818\"" Jul 10 00:35:51.757486 env[1315]: time="2025-07-10T00:35:51.757464170Z" level=info msg="StartContainer for \"a0b22a37df78a1fbff38ac515b3919f63f8fc58418eeb7a88c14ef4ae87c1818\"" Jul 10 00:35:51.818937 env[1315]: time="2025-07-10T00:35:51.818892431Z" level=info msg="StartContainer for \"a0b22a37df78a1fbff38ac515b3919f63f8fc58418eeb7a88c14ef4ae87c1818\" returns successfully" Jul 10 00:35:52.239782 kubelet[2094]: I0710 00:35:52.239577 2094 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-hknqm" podStartSLOduration=1.22737569 podStartE2EDuration="3.239561358s" podCreationTimestamp="2025-07-10 00:35:49 +0000 UTC" firstStartedPulling="2025-07-10 00:35:49.73283707 +0000 UTC m=+7.629416131" lastFinishedPulling="2025-07-10 00:35:51.745022738 +0000 UTC m=+9.641601799" observedRunningTime="2025-07-10 00:35:52.239391922 +0000 UTC m=+10.135971023" watchObservedRunningTime="2025-07-10 00:35:52.239561358 +0000 UTC m=+10.136140379" Jul 10 00:35:55.494358 kubelet[2094]: E0710 00:35:55.494323 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:56.548000 audit[2466]: NETFILTER_CFG table=filter:89 family=2 entries=14 op=nft_register_rule pid=2466 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:35:56.553113 kernel: kauditd_printk_skb: 143 callbacks suppressed Jul 10 00:35:56.553198 kernel: audit: type=1325 audit(1752107756.548:274): table=filter:89 family=2 entries=14 op=nft_register_rule pid=2466 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jul 10 00:35:56.553224 kernel: audit: type=1300 audit(1752107756.548:274): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe5b4a8a0 a2=0 a3=1 items=0 ppid=2205 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:56.548000 audit[2466]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe5b4a8a0 a2=0 a3=1 items=0 ppid=2205 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:56.548000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:35:56.558666 kernel: audit: type=1327 audit(1752107756.548:274): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:35:56.560000 audit[2466]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2466 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:35:56.564449 kernel: audit: type=1325 audit(1752107756.560:275): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2466 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:35:56.564493 kernel: audit: type=1300 audit(1752107756.560:275): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe5b4a8a0 a2=0 a3=1 items=0 ppid=2205 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:56.560000 audit[2466]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe5b4a8a0 a2=0 a3=1 items=0 ppid=2205 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:56.560000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:35:56.571462 kernel: audit: type=1327 audit(1752107756.560:275): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:35:56.577000 audit[2468]: NETFILTER_CFG table=filter:91 family=2 entries=15 op=nft_register_rule pid=2468 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:35:56.577000 audit[2468]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffff9d3abd0 a2=0 a3=1 items=0 ppid=2205 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:56.584502 kernel: audit: type=1325 audit(1752107756.577:276): table=filter:91 family=2 entries=15 op=nft_register_rule pid=2468 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:35:56.584566 kernel: audit: type=1300 audit(1752107756.577:276): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffff9d3abd0 a2=0 a3=1 items=0 ppid=2205 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:56.584588 kernel: audit: type=1327 audit(1752107756.577:276): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:35:56.577000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:35:56.588000 audit[2468]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2468 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:35:56.588000 audit[2468]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff9d3abd0 a2=0 a3=1 items=0 ppid=2205 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:35:56.592462 kernel: audit: type=1325 audit(1752107756.588:277): table=nat:92 family=2 entries=12 op=nft_register_rule pid=2468 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:35:56.588000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:35:56.787319 kubelet[2094]: E0710 00:35:56.787277 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:57.308011 sudo[1480]: pam_unix(sudo:session): session closed for user root Jul 10 00:35:57.306000 audit[1480]: USER_END pid=1480 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 10 00:35:57.306000 audit[1480]: CRED_DISP pid=1480 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 10 00:35:57.314537 sshd[1474]: pam_unix(sshd:session): session closed for user core Jul 10 00:35:57.313000 audit[1474]: USER_END pid=1474 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:35:57.314000 audit[1474]: CRED_DISP pid=1474 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:35:57.317145 systemd[1]: sshd@6-10.0.0.85:22-10.0.0.1:35250.service: Deactivated successfully. Jul 10 00:35:57.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.85:22-10.0.0.1:35250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:35:57.318083 systemd-logind[1302]: Session 7 logged out. Waiting for processes to exit. Jul 10 00:35:57.318103 systemd[1]: session-7.scope: Deactivated successfully. Jul 10 00:35:57.318888 systemd-logind[1302]: Removed session 7. 
Jul 10 00:35:58.966523 update_engine[1305]: I0710 00:35:58.966480 1305 update_attempter.cc:509] Updating boot flags... Jul 10 00:36:01.823000 audit[2505]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2505 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:01.825952 kernel: kauditd_printk_skb: 7 callbacks suppressed Jul 10 00:36:01.826025 kernel: audit: type=1325 audit(1752107761.823:283): table=filter:93 family=2 entries=17 op=nft_register_rule pid=2505 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:01.823000 audit[2505]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc7546210 a2=0 a3=1 items=0 ppid=2205 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:01.832102 kernel: audit: type=1300 audit(1752107761.823:283): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc7546210 a2=0 a3=1 items=0 ppid=2205 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:01.832459 kernel: audit: type=1327 audit(1752107761.823:283): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:01.823000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:01.833000 audit[2505]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2505 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:01.837456 kernel: audit: type=1325 audit(1752107761.833:284): table=nat:94 family=2 entries=12 op=nft_register_rule pid=2505 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:01.833000 audit[2505]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc7546210 a2=0 a3=1 items=0 ppid=2205 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:01.842302 kernel: audit: type=1300 audit(1752107761.833:284): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc7546210 a2=0 a3=1 items=0 ppid=2205 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:01.842375 kernel: audit: type=1327 audit(1752107761.833:284): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:01.833000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:01.865000 audit[2507]: NETFILTER_CFG table=filter:95 family=2 entries=18 op=nft_register_rule pid=2507 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:01.865000 audit[2507]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc1f79310 a2=0 a3=1 items=0 ppid=2205 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:01.872768 kernel: audit: type=1325 audit(1752107761.865:285): table=filter:95 family=2 entries=18 op=nft_register_rule pid=2507 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:01.872854 kernel: audit: type=1300 audit(1752107761.865:285): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc1f79310 a2=0 a3=1 items=0 ppid=2205 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:01.872880 kernel: audit: type=1327 audit(1752107761.865:285): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:01.865000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:01.875000 audit[2507]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2507 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:01.875000 audit[2507]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc1f79310 a2=0 a3=1 items=0 ppid=2205 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:01.875000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:01.884468 kernel: audit: type=1325 audit(1752107761.875:286): table=nat:96 family=2 entries=12 op=nft_register_rule pid=2507 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:01.928977 kubelet[2094]: I0710 00:36:01.928934 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/73728375-b670-4065-833b-6783b7185986-typha-certs\") pod \"calico-typha-54d46dd7d7-gbr6s\" (UID: \"73728375-b670-4065-833b-6783b7185986\") " pod="calico-system/calico-typha-54d46dd7d7-gbr6s" Jul 10 00:36:01.929424 kubelet[2094]: I0710 00:36:01.929402 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsw47\" (UniqueName: \"kubernetes.io/projected/73728375-b670-4065-833b-6783b7185986-kube-api-access-xsw47\") pod \"calico-typha-54d46dd7d7-gbr6s\" (UID: \"73728375-b670-4065-833b-6783b7185986\") " pod="calico-system/calico-typha-54d46dd7d7-gbr6s" Jul 10 00:36:01.929559 kubelet[2094]: I0710 00:36:01.929543 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73728375-b670-4065-833b-6783b7185986-tigera-ca-bundle\") pod \"calico-typha-54d46dd7d7-gbr6s\" (UID: \"73728375-b670-4065-833b-6783b7185986\") " pod="calico-system/calico-typha-54d46dd7d7-gbr6s" Jul 10 00:36:02.130844 kubelet[2094]: I0710 00:36:02.130737 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2c63a94-5a22-4001-b055-cfc2ddfc5f34-lib-modules\") pod \"calico-node-km5zf\" (UID: \"a2c63a94-5a22-4001-b055-cfc2ddfc5f34\") " pod="calico-system/calico-node-km5zf" Jul 10 00:36:02.131070 kubelet[2094]: I0710 
00:36:02.131054 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a2c63a94-5a22-4001-b055-cfc2ddfc5f34-policysync\") pod \"calico-node-km5zf\" (UID: \"a2c63a94-5a22-4001-b055-cfc2ddfc5f34\") " pod="calico-system/calico-node-km5zf" Jul 10 00:36:02.131158 kubelet[2094]: I0710 00:36:02.131140 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a2c63a94-5a22-4001-b055-cfc2ddfc5f34-flexvol-driver-host\") pod \"calico-node-km5zf\" (UID: \"a2c63a94-5a22-4001-b055-cfc2ddfc5f34\") " pod="calico-system/calico-node-km5zf" Jul 10 00:36:02.131261 kubelet[2094]: I0710 00:36:02.131248 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2c63a94-5a22-4001-b055-cfc2ddfc5f34-xtables-lock\") pod \"calico-node-km5zf\" (UID: \"a2c63a94-5a22-4001-b055-cfc2ddfc5f34\") " pod="calico-system/calico-node-km5zf" Jul 10 00:36:02.131369 kubelet[2094]: I0710 00:36:02.131357 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrq65\" (UniqueName: \"kubernetes.io/projected/a2c63a94-5a22-4001-b055-cfc2ddfc5f34-kube-api-access-mrq65\") pod \"calico-node-km5zf\" (UID: \"a2c63a94-5a22-4001-b055-cfc2ddfc5f34\") " pod="calico-system/calico-node-km5zf" Jul 10 00:36:02.131510 kubelet[2094]: I0710 00:36:02.131468 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2c63a94-5a22-4001-b055-cfc2ddfc5f34-tigera-ca-bundle\") pod \"calico-node-km5zf\" (UID: \"a2c63a94-5a22-4001-b055-cfc2ddfc5f34\") " pod="calico-system/calico-node-km5zf" Jul 10 00:36:02.131622 kubelet[2094]: I0710 00:36:02.131609 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a2c63a94-5a22-4001-b055-cfc2ddfc5f34-cni-bin-dir\") pod \"calico-node-km5zf\" (UID: \"a2c63a94-5a22-4001-b055-cfc2ddfc5f34\") " pod="calico-system/calico-node-km5zf" Jul 10 00:36:02.131743 kubelet[2094]: I0710 00:36:02.131730 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a2c63a94-5a22-4001-b055-cfc2ddfc5f34-cni-log-dir\") pod \"calico-node-km5zf\" (UID: \"a2c63a94-5a22-4001-b055-cfc2ddfc5f34\") " pod="calico-system/calico-node-km5zf" Jul 10 00:36:02.131850 kubelet[2094]: I0710 00:36:02.131838 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a2c63a94-5a22-4001-b055-cfc2ddfc5f34-cni-net-dir\") pod \"calico-node-km5zf\" (UID: \"a2c63a94-5a22-4001-b055-cfc2ddfc5f34\") " pod="calico-system/calico-node-km5zf" Jul 10 00:36:02.131963 kubelet[2094]: I0710 00:36:02.131950 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a2c63a94-5a22-4001-b055-cfc2ddfc5f34-var-lib-calico\") pod \"calico-node-km5zf\" (UID: \"a2c63a94-5a22-4001-b055-cfc2ddfc5f34\") " pod="calico-system/calico-node-km5zf" Jul 10 00:36:02.132094 kubelet[2094]: I0710 00:36:02.132080 2094 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a2c63a94-5a22-4001-b055-cfc2ddfc5f34-var-run-calico\") pod \"calico-node-km5zf\" (UID: \"a2c63a94-5a22-4001-b055-cfc2ddfc5f34\") " pod="calico-system/calico-node-km5zf" Jul 10 00:36:02.132185 kubelet[2094]: I0710 00:36:02.132174 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a2c63a94-5a22-4001-b055-cfc2ddfc5f34-node-certs\") pod \"calico-node-km5zf\" (UID: \"a2c63a94-5a22-4001-b055-cfc2ddfc5f34\") " pod="calico-system/calico-node-km5zf" Jul 10 00:36:02.157183 kubelet[2094]: E0710 00:36:02.157149 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:02.158144 env[1315]: time="2025-07-10T00:36:02.158100218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54d46dd7d7-gbr6s,Uid:73728375-b670-4065-833b-6783b7185986,Namespace:calico-system,Attempt:0,}" Jul 10 00:36:02.185464 env[1315]: time="2025-07-10T00:36:02.181597267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:36:02.185464 env[1315]: time="2025-07-10T00:36:02.181642906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:36:02.185464 env[1315]: time="2025-07-10T00:36:02.181652666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:36:02.187109 env[1315]: time="2025-07-10T00:36:02.186547033Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bc8d51f80119d37d8d1d508989aa211889002b43662be9260cd3455f5475df9f pid=2516 runtime=io.containerd.runc.v2 Jul 10 00:36:02.234382 kubelet[2094]: E0710 00:36:02.234351 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.234584 kubelet[2094]: W0710 00:36:02.234568 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.234675 kubelet[2094]: E0710 00:36:02.234662 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.234915 kubelet[2094]: E0710 00:36:02.234897 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.234972 kubelet[2094]: W0710 00:36:02.234916 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.234972 kubelet[2094]: E0710 00:36:02.234935 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:36:02.235121 kubelet[2094]: E0710 00:36:02.235108 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.235163 kubelet[2094]: W0710 00:36:02.235122 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.235163 kubelet[2094]: E0710 00:36:02.235132 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.235293 kubelet[2094]: E0710 00:36:02.235280 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.235334 kubelet[2094]: W0710 00:36:02.235294 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.235398 kubelet[2094]: E0710 00:36:02.235382 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.235501 kubelet[2094]: E0710 00:36:02.235452 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.235586 kubelet[2094]: W0710 00:36:02.235573 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.235691 kubelet[2094]: E0710 00:36:02.235668 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.235878 kubelet[2094]: E0710 00:36:02.235865 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.235960 kubelet[2094]: W0710 00:36:02.235947 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.236097 kubelet[2094]: E0710 00:36:02.236067 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.236283 kubelet[2094]: E0710 00:36:02.236270 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.236367 kubelet[2094]: W0710 00:36:02.236355 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.236495 kubelet[2094]: E0710 00:36:02.236466 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:36:02.236718 kubelet[2094]: E0710 00:36:02.236699 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.236798 kubelet[2094]: W0710 00:36:02.236785 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.236878 kubelet[2094]: E0710 00:36:02.236866 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.237152 kubelet[2094]: E0710 00:36:02.237133 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.237152 kubelet[2094]: W0710 00:36:02.237150 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.237242 kubelet[2094]: E0710 00:36:02.237166 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.238418 kubelet[2094]: E0710 00:36:02.237499 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.238418 kubelet[2094]: W0710 00:36:02.237522 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.238418 kubelet[2094]: E0710 00:36:02.237582 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.238418 kubelet[2094]: E0710 00:36:02.237758 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.238418 kubelet[2094]: W0710 00:36:02.237768 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.238418 kubelet[2094]: E0710 00:36:02.237789 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.238418 kubelet[2094]: E0710 00:36:02.237991 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.238418 kubelet[2094]: W0710 00:36:02.238023 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.238418 kubelet[2094]: E0710 00:36:02.238039 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:36:02.238418 kubelet[2094]: E0710 00:36:02.238234 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.239022 kubelet[2094]: W0710 00:36:02.238243 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.239022 kubelet[2094]: E0710 00:36:02.238252 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.239022 kubelet[2094]: E0710 00:36:02.238468 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.239022 kubelet[2094]: W0710 00:36:02.238486 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.239022 kubelet[2094]: E0710 00:36:02.238496 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.241034 kubelet[2094]: E0710 00:36:02.241004 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.241034 kubelet[2094]: W0710 00:36:02.241029 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.241137 kubelet[2094]: E0710 00:36:02.241049 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.247534 kubelet[2094]: E0710 00:36:02.247507 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.247534 kubelet[2094]: W0710 00:36:02.247531 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.247634 kubelet[2094]: E0710 00:36:02.247551 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:36:02.258507 env[1315]: time="2025-07-10T00:36:02.258454039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54d46dd7d7-gbr6s,Uid:73728375-b670-4065-833b-6783b7185986,Namespace:calico-system,Attempt:0,} returns sandbox id \"bc8d51f80119d37d8d1d508989aa211889002b43662be9260cd3455f5475df9f\"" Jul 10 00:36:02.259014 kubelet[2094]: E0710 00:36:02.258991 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:02.260781 env[1315]: time="2025-07-10T00:36:02.260755165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 10 00:36:02.328465 kubelet[2094]: E0710 00:36:02.326385 2094 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fxvwp" podUID="d132b5af-8e1a-4884-a0af-6e4f358a849a" Jul 10 00:36:02.328840 kubelet[2094]: E0710 00:36:02.328807 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.328840 kubelet[2094]: W0710 00:36:02.328836 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.328934 kubelet[2094]: E0710 00:36:02.328862 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.329241 kubelet[2094]: E0710 00:36:02.329217 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.329241 kubelet[2094]: W0710 00:36:02.329241 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.329333 kubelet[2094]: E0710 00:36:02.329255 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.329541 kubelet[2094]: E0710 00:36:02.329521 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.329541 kubelet[2094]: W0710 00:36:02.329539 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.329638 kubelet[2094]: E0710 00:36:02.329550 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:36:02.329872 kubelet[2094]: E0710 00:36:02.329852 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.329872 kubelet[2094]: W0710 00:36:02.329867 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.329948 kubelet[2094]: E0710 00:36:02.329878 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.330130 kubelet[2094]: E0710 00:36:02.330110 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.330130 kubelet[2094]: W0710 00:36:02.330124 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.330130 kubelet[2094]: E0710 00:36:02.330134 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.330369 kubelet[2094]: E0710 00:36:02.330351 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.330369 kubelet[2094]: W0710 00:36:02.330366 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.330456 kubelet[2094]: E0710 00:36:02.330392 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.330613 kubelet[2094]: E0710 00:36:02.330593 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.330613 kubelet[2094]: W0710 00:36:02.330607 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.330613 kubelet[2094]: E0710 00:36:02.330616 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.330837 kubelet[2094]: E0710 00:36:02.330805 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.330837 kubelet[2094]: W0710 00:36:02.330828 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.330837 kubelet[2094]: E0710 00:36:02.330838 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:36:02.331094 kubelet[2094]: E0710 00:36:02.331078 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.331094 kubelet[2094]: W0710 00:36:02.331091 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.331185 kubelet[2094]: E0710 00:36:02.331100 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.331325 kubelet[2094]: E0710 00:36:02.331304 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.331325 kubelet[2094]: W0710 00:36:02.331317 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.331396 kubelet[2094]: E0710 00:36:02.331326 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.331541 kubelet[2094]: E0710 00:36:02.331517 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.331541 kubelet[2094]: W0710 00:36:02.331531 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.331541 kubelet[2094]: E0710 00:36:02.331542 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.331764 kubelet[2094]: E0710 00:36:02.331746 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.331764 kubelet[2094]: W0710 00:36:02.331760 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.331831 kubelet[2094]: E0710 00:36:02.331769 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.332037 kubelet[2094]: E0710 00:36:02.332020 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.332037 kubelet[2094]: W0710 00:36:02.332034 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.332114 kubelet[2094]: E0710 00:36:02.332045 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:36:02.332247 kubelet[2094]: E0710 00:36:02.332232 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.332247 kubelet[2094]: W0710 00:36:02.332243 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.332326 kubelet[2094]: E0710 00:36:02.332251 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.338350 kubelet[2094]: E0710 00:36:02.338321 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.338350 kubelet[2094]: W0710 00:36:02.338345 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.338532 kubelet[2094]: E0710 00:36:02.338365 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.339375 kubelet[2094]: E0710 00:36:02.339302 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.339506 kubelet[2094]: W0710 00:36:02.339390 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.339506 kubelet[2094]: E0710 00:36:02.339426 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.339824 kubelet[2094]: E0710 00:36:02.339794 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.339824 kubelet[2094]: W0710 00:36:02.339812 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.339824 kubelet[2094]: E0710 00:36:02.339824 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.340051 kubelet[2094]: E0710 00:36:02.340027 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.340051 kubelet[2094]: W0710 00:36:02.340043 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.340051 kubelet[2094]: E0710 00:36:02.340053 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:36:02.340308 kubelet[2094]: E0710 00:36:02.340275 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.340308 kubelet[2094]: W0710 00:36:02.340293 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.340308 kubelet[2094]: E0710 00:36:02.340306 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.344318 kubelet[2094]: E0710 00:36:02.344289 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.344621 kubelet[2094]: W0710 00:36:02.344556 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.344685 kubelet[2094]: E0710 00:36:02.344643 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.345349 kubelet[2094]: E0710 00:36:02.345314 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.345349 kubelet[2094]: W0710 00:36:02.345347 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.345478 kubelet[2094]: E0710 00:36:02.345362 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.345478 kubelet[2094]: I0710 00:36:02.345387 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d132b5af-8e1a-4884-a0af-6e4f358a849a-kubelet-dir\") pod \"csi-node-driver-fxvwp\" (UID: \"d132b5af-8e1a-4884-a0af-6e4f358a849a\") " pod="calico-system/csi-node-driver-fxvwp" Jul 10 00:36:02.345763 kubelet[2094]: E0710 00:36:02.345737 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.345804 kubelet[2094]: W0710 00:36:02.345777 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.345804 kubelet[2094]: E0710 00:36:02.345792 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:36:02.345859 kubelet[2094]: I0710 00:36:02.345811 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d132b5af-8e1a-4884-a0af-6e4f358a849a-varrun\") pod \"csi-node-driver-fxvwp\" (UID: \"d132b5af-8e1a-4884-a0af-6e4f358a849a\") " pod="calico-system/csi-node-driver-fxvwp" Jul 10 00:36:02.346051 kubelet[2094]: E0710 00:36:02.346035 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.346051 kubelet[2094]: W0710 00:36:02.346050 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.346119 kubelet[2094]: E0710 00:36:02.346060 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.346119 kubelet[2094]: I0710 00:36:02.346092 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d132b5af-8e1a-4884-a0af-6e4f358a849a-registration-dir\") pod \"csi-node-driver-fxvwp\" (UID: \"d132b5af-8e1a-4884-a0af-6e4f358a849a\") " pod="calico-system/csi-node-driver-fxvwp" Jul 10 00:36:02.346290 kubelet[2094]: E0710 00:36:02.346276 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.346330 kubelet[2094]: W0710 00:36:02.346289 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.346330 kubelet[2094]: E0710 00:36:02.346310 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.346330 kubelet[2094]: I0710 00:36:02.346325 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d132b5af-8e1a-4884-a0af-6e4f358a849a-socket-dir\") pod \"csi-node-driver-fxvwp\" (UID: \"d132b5af-8e1a-4884-a0af-6e4f358a849a\") " pod="calico-system/csi-node-driver-fxvwp" Jul 10 00:36:02.346574 kubelet[2094]: E0710 00:36:02.346558 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.346635 kubelet[2094]: W0710 00:36:02.346582 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.346635 kubelet[2094]: E0710 00:36:02.346597 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:36:02.346694 kubelet[2094]: I0710 00:36:02.346625 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smv8t\" (UniqueName: \"kubernetes.io/projected/d132b5af-8e1a-4884-a0af-6e4f358a849a-kube-api-access-smv8t\") pod \"csi-node-driver-fxvwp\" (UID: \"d132b5af-8e1a-4884-a0af-6e4f358a849a\") " pod="calico-system/csi-node-driver-fxvwp" Jul 10 00:36:02.347621 kubelet[2094]: E0710 00:36:02.347561 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.347621 kubelet[2094]: W0710 00:36:02.347618 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.347986 kubelet[2094]: E0710 00:36:02.347962 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.348716 kubelet[2094]: E0710 00:36:02.348359 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.348716 kubelet[2094]: W0710 00:36:02.348377 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.348716 kubelet[2094]: E0710 00:36:02.348426 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.348716 kubelet[2094]: E0710 00:36:02.348701 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.348716 kubelet[2094]: W0710 00:36:02.348711 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.348882 kubelet[2094]: E0710 00:36:02.348773 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.349162 kubelet[2094]: E0710 00:36:02.348989 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.349162 kubelet[2094]: W0710 00:36:02.349005 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.349162 kubelet[2094]: E0710 00:36:02.349066 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:36:02.349809 kubelet[2094]: E0710 00:36:02.349790 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.349859 kubelet[2094]: W0710 00:36:02.349809 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.349897 kubelet[2094]: E0710 00:36:02.349859 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.350024 kubelet[2094]: E0710 00:36:02.350012 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.350024 kubelet[2094]: W0710 00:36:02.350024 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.350101 kubelet[2094]: E0710 00:36:02.350033 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.350875 kubelet[2094]: E0710 00:36:02.350249 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.350875 kubelet[2094]: W0710 00:36:02.350286 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.350875 kubelet[2094]: E0710 00:36:02.350297 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.350875 kubelet[2094]: E0710 00:36:02.350687 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.350875 kubelet[2094]: W0710 00:36:02.350701 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.350875 kubelet[2094]: E0710 00:36:02.350727 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.351064 kubelet[2094]: E0710 00:36:02.351000 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.351064 kubelet[2094]: W0710 00:36:02.351011 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.351064 kubelet[2094]: E0710 00:36:02.351023 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:36:02.351499 kubelet[2094]: E0710 00:36:02.351373 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.351499 kubelet[2094]: W0710 00:36:02.351406 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.351499 kubelet[2094]: E0710 00:36:02.351419 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.386758 env[1315]: time="2025-07-10T00:36:02.386633085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-km5zf,Uid:a2c63a94-5a22-4001-b055-cfc2ddfc5f34,Namespace:calico-system,Attempt:0,}" Jul 10 00:36:02.412476 env[1315]: time="2025-07-10T00:36:02.412074625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:36:02.412476 env[1315]: time="2025-07-10T00:36:02.412203903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:36:02.414481 env[1315]: time="2025-07-10T00:36:02.412232183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:36:02.414481 env[1315]: time="2025-07-10T00:36:02.413082770Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/48ae79c9e47f567624931b859f95f3bd83e900826f80cd34b028716d50d633a7 pid=2620 runtime=io.containerd.runc.v2 Jul 10 00:36:02.448885 kubelet[2094]: E0710 00:36:02.448838 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.448885 kubelet[2094]: W0710 00:36:02.448869 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.448885 kubelet[2094]: E0710 00:36:02.448890 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.449175 kubelet[2094]: E0710 00:36:02.449057 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.449175 kubelet[2094]: W0710 00:36:02.449066 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.449175 kubelet[2094]: E0710 00:36:02.449075 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:36:02.449294 kubelet[2094]: E0710 00:36:02.449216 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.449294 kubelet[2094]: W0710 00:36:02.449229 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.449294 kubelet[2094]: E0710 00:36:02.449237 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.449397 kubelet[2094]: E0710 00:36:02.449376 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.449397 kubelet[2094]: W0710 00:36:02.449394 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.449478 kubelet[2094]: E0710 00:36:02.449403 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.449626 kubelet[2094]: E0710 00:36:02.449612 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.449656 kubelet[2094]: W0710 00:36:02.449625 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.449656 kubelet[2094]: E0710 00:36:02.449635 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.450121 kubelet[2094]: E0710 00:36:02.449862 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.450121 kubelet[2094]: W0710 00:36:02.449875 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.451919 kubelet[2094]: E0710 00:36:02.451856 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.452171 kubelet[2094]: E0710 00:36:02.452143 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.452171 kubelet[2094]: W0710 00:36:02.452158 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.452171 kubelet[2094]: E0710 00:36:02.452171 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:36:02.454873 kubelet[2094]: E0710 00:36:02.454823 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.454873 kubelet[2094]: W0710 00:36:02.454841 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.454873 kubelet[2094]: E0710 00:36:02.454861 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.457761 kubelet[2094]: E0710 00:36:02.457737 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.457761 kubelet[2094]: W0710 00:36:02.457755 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.457918 kubelet[2094]: E0710 00:36:02.457846 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.458089 kubelet[2094]: E0710 00:36:02.458076 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.458089 kubelet[2094]: W0710 00:36:02.458088 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.458216 kubelet[2094]: E0710 00:36:02.458188 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.458318 kubelet[2094]: E0710 00:36:02.458304 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.458318 kubelet[2094]: W0710 00:36:02.458317 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.458484 kubelet[2094]: E0710 00:36:02.458388 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.458536 kubelet[2094]: E0710 00:36:02.458513 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.458536 kubelet[2094]: W0710 00:36:02.458522 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.458607 kubelet[2094]: E0710 00:36:02.458592 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:36:02.458712 kubelet[2094]: E0710 00:36:02.458700 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.458748 kubelet[2094]: W0710 00:36:02.458712 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.458836 kubelet[2094]: E0710 00:36:02.458780 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.458876 kubelet[2094]: E0710 00:36:02.458856 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.458876 kubelet[2094]: W0710 00:36:02.458863 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.458876 kubelet[2094]: E0710 00:36:02.458873 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.459028 kubelet[2094]: E0710 00:36:02.459016 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.459061 kubelet[2094]: W0710 00:36:02.459028 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.459061 kubelet[2094]: E0710 00:36:02.459039 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.459221 kubelet[2094]: E0710 00:36:02.459210 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.459260 kubelet[2094]: W0710 00:36:02.459221 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.459260 kubelet[2094]: E0710 00:36:02.459230 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.459389 kubelet[2094]: E0710 00:36:02.459379 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.459389 kubelet[2094]: W0710 00:36:02.459389 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.459485 kubelet[2094]: E0710 00:36:02.459400 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:36:02.460074 kubelet[2094]: E0710 00:36:02.460021 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.460074 kubelet[2094]: W0710 00:36:02.460040 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.460167 kubelet[2094]: E0710 00:36:02.460081 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.460365 kubelet[2094]: E0710 00:36:02.460349 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.460365 kubelet[2094]: W0710 00:36:02.460361 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.460541 kubelet[2094]: E0710 00:36:02.460397 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.460615 kubelet[2094]: E0710 00:36:02.460600 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.460615 kubelet[2094]: W0710 00:36:02.460613 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.460786 kubelet[2094]: E0710 00:36:02.460677 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.460992 kubelet[2094]: E0710 00:36:02.460917 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.460992 kubelet[2094]: W0710 00:36:02.460928 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.460992 kubelet[2094]: E0710 00:36:02.460939 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.465645 kubelet[2094]: E0710 00:36:02.465596 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.465645 kubelet[2094]: W0710 00:36:02.465620 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.465645 kubelet[2094]: E0710 00:36:02.465639 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:36:02.466005 kubelet[2094]: E0710 00:36:02.465925 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.466005 kubelet[2094]: W0710 00:36:02.465940 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.466005 kubelet[2094]: E0710 00:36:02.465950 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.466278 kubelet[2094]: E0710 00:36:02.466248 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.466278 kubelet[2094]: W0710 00:36:02.466264 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.466344 kubelet[2094]: E0710 00:36:02.466275 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.466559 kubelet[2094]: E0710 00:36:02.466511 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.466559 kubelet[2094]: W0710 00:36:02.466536 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.466559 kubelet[2094]: E0710 00:36:02.466547 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:02.470130 kubelet[2094]: E0710 00:36:02.470099 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:02.470130 kubelet[2094]: W0710 00:36:02.470117 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:02.470130 kubelet[2094]: E0710 00:36:02.470130 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:36:02.497095 env[1315]: time="2025-07-10T00:36:02.497053556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-km5zf,Uid:a2c63a94-5a22-4001-b055-cfc2ddfc5f34,Namespace:calico-system,Attempt:0,} returns sandbox id \"48ae79c9e47f567624931b859f95f3bd83e900826f80cd34b028716d50d633a7\"" Jul 10 00:36:02.895000 audit[2682]: NETFILTER_CFG table=filter:97 family=2 entries=20 op=nft_register_rule pid=2682 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:02.895000 audit[2682]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffcdc6c3a0 a2=0 a3=1 items=0 ppid=2205 pid=2682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:02.895000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:02.900000 audit[2682]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2682 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:02.900000 audit[2682]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcdc6c3a0 a2=0 a3=1 items=0 ppid=2205 pid=2682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:02.900000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:03.038844 systemd[1]: run-containerd-runc-k8s.io-bc8d51f80119d37d8d1d508989aa211889002b43662be9260cd3455f5475df9f-runc.3XsNLm.mount: Deactivated successfully. Jul 10 00:36:04.060238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1621850213.mount: Deactivated successfully. 
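The repeated driver-call.go and plugins.go errors throughout this section come from the kubelet's FlexVolume probe: it scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, finds the nodeagent~uds directory, and execs the uds driver with the single argument init, expecting a JSON status object on stdout. Because the executable is missing ("executable file not found in $PATH"), stdout is empty and the unmarshal fails with "unexpected end of JSON input". Below is a minimal Go sketch of a driver that would satisfy that init probe. It assumes the documented FlexVolume reply shape (a JSON object with "status" and "capabilities" fields), which the log itself does not show, and it illustrates only the calling convention; it is not the real nodeagent~uds driver.

// uds_stub.go - minimal FlexVolume driver sketch answering the "init" probe.
package main

import (
	"encoding/json"
	"os"
)

// driverStatus mirrors the JSON object the kubelet's FlexVolume
// driver-call code tries to unmarshal from the driver's stdout.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	enc := json.NewEncoder(os.Stdout)
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Report attach=false so the kubelet never routes attach/detach
		// calls to this stub.
		_ = enc.Encode(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		return
	}
	// Every other FlexVolume command is declined by this sketch.
	_ = enc.Encode(driverStatus{
		Status:  "Not supported",
		Message: "stub driver: only init is implemented",
	})
}

Built and installed as /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the executable bit set, a stub like this would answer the probe with valid JSON and end this particular error loop; since the pods in this log mount only host-path and projected volumes, the failures above appear to be probe noise rather than a failed mount.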
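The NETFILTER_CFG / SYSCALL / PROCTITLE audit records on the preceding log line capture /usr/sbin/xtables-nft-multi being run as iptables-restore while nft rules are registered. The PROCTITLE field is the process's argv, hex-encoded with NUL bytes between arguments; decoded, the value above reads iptables-restore -w 5 -W 100000 --noflush --counters. A short Go sketch of that decoding, with the hex string copied verbatim from the record:

// decode_proctitle.go - turn an audit PROCTITLE hex field back into argv.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// PROCTITLE value copied from the audit record in this log;
	// argv entries are separated by NUL bytes.
	const proctitle = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.Join(strings.Split(string(raw), "\x00"), " "))
	// Prints: iptables-restore -w 5 -W 100000 --noflush --counters
}

The -w/-W options are the xtables lock wait settings, and --noflush with --counters restores rules without flushing existing chains or resetting counters, which is consistent with an incremental rule update rather than a full table rewrite.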
Jul 10 00:36:04.210752 kubelet[2094]: E0710 00:36:04.210657 2094 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fxvwp" podUID="d132b5af-8e1a-4884-a0af-6e4f358a849a" Jul 10 00:36:04.792327 env[1315]: time="2025-07-10T00:36:04.792270033Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:04.793900 env[1315]: time="2025-07-10T00:36:04.793859411Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:04.795820 env[1315]: time="2025-07-10T00:36:04.795784145Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:04.797272 env[1315]: time="2025-07-10T00:36:04.797245085Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:04.797692 env[1315]: time="2025-07-10T00:36:04.797664519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 10 00:36:04.799065 env[1315]: time="2025-07-10T00:36:04.799034741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 10 00:36:04.816149 env[1315]: time="2025-07-10T00:36:04.816102429Z" level=info msg="CreateContainer within sandbox \"bc8d51f80119d37d8d1d508989aa211889002b43662be9260cd3455f5475df9f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 10 00:36:04.826910 env[1315]: time="2025-07-10T00:36:04.826759844Z" level=info msg="CreateContainer within sandbox \"bc8d51f80119d37d8d1d508989aa211889002b43662be9260cd3455f5475df9f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ecbd64eb02979acd6084832cf6aa7644a456e0d206b80348c0b52fac8d55d0ec\"" Jul 10 00:36:04.827508 env[1315]: time="2025-07-10T00:36:04.827338716Z" level=info msg="StartContainer for \"ecbd64eb02979acd6084832cf6aa7644a456e0d206b80348c0b52fac8d55d0ec\"" Jul 10 00:36:04.931830 env[1315]: time="2025-07-10T00:36:04.930577712Z" level=info msg="StartContainer for \"ecbd64eb02979acd6084832cf6aa7644a456e0d206b80348c0b52fac8d55d0ec\" returns successfully" Jul 10 00:36:05.253801 kubelet[2094]: E0710 00:36:05.253761 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:05.268669 kubelet[2094]: E0710 00:36:05.268621 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:05.268669 kubelet[2094]: W0710 00:36:05.268649 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:05.268669 kubelet[2094]: E0710 00:36:05.268670 
2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:05.268917 kubelet[2094]: E0710 00:36:05.268893 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:05.268917 kubelet[2094]: W0710 00:36:05.268909 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:05.268986 kubelet[2094]: E0710 00:36:05.268920 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:05.269175 kubelet[2094]: E0710 00:36:05.269155 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:05.269175 kubelet[2094]: W0710 00:36:05.269173 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:05.269241 kubelet[2094]: E0710 00:36:05.269204 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:05.269541 kubelet[2094]: E0710 00:36:05.269522 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:05.269541 kubelet[2094]: W0710 00:36:05.269540 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:05.269631 kubelet[2094]: E0710 00:36:05.269553 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:05.269752 kubelet[2094]: E0710 00:36:05.269737 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:05.269752 kubelet[2094]: W0710 00:36:05.269750 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:05.269809 kubelet[2094]: E0710 00:36:05.269759 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:05.269900 kubelet[2094]: E0710 00:36:05.269889 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:05.269930 kubelet[2094]: W0710 00:36:05.269900 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:05.269930 kubelet[2094]: E0710 00:36:05.269908 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:36:05.270040 kubelet[2094]: E0710 00:36:05.270029 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:05.270067 kubelet[2094]: W0710 00:36:05.270039 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:05.270067 kubelet[2094]: E0710 00:36:05.270047 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:05.270185 kubelet[2094]: E0710 00:36:05.270173 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:05.270185 kubelet[2094]: W0710 00:36:05.270184 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:05.270240 kubelet[2094]: E0710 00:36:05.270192 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:05.270351 kubelet[2094]: E0710 00:36:05.270340 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:05.270378 kubelet[2094]: W0710 00:36:05.270351 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:05.270378 kubelet[2094]: E0710 00:36:05.270358 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:05.270596 kubelet[2094]: E0710 00:36:05.270578 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:05.270596 kubelet[2094]: W0710 00:36:05.270595 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:05.270664 kubelet[2094]: E0710 00:36:05.270606 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:05.270779 kubelet[2094]: E0710 00:36:05.270766 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:05.270779 kubelet[2094]: W0710 00:36:05.270778 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:05.270834 kubelet[2094]: E0710 00:36:05.270786 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:36:05.271817 kubelet[2094]: E0710 00:36:05.271127 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:05.271817 kubelet[2094]: W0710 00:36:05.271145 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:05.271817 kubelet[2094]: E0710 00:36:05.271161 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:05.271817 kubelet[2094]: E0710 00:36:05.271413 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:05.271817 kubelet[2094]: W0710 00:36:05.271424 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:05.271817 kubelet[2094]: E0710 00:36:05.271444 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:05.271817 kubelet[2094]: E0710 00:36:05.271722 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:05.271817 kubelet[2094]: W0710 00:36:05.271734 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:05.271817 kubelet[2094]: E0710 00:36:05.271744 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:05.272903 kubelet[2094]: E0710 00:36:05.272250 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:05.272903 kubelet[2094]: W0710 00:36:05.272267 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:05.272903 kubelet[2094]: E0710 00:36:05.272279 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:05.365144 kubelet[2094]: E0710 00:36:05.365104 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:05.365291 kubelet[2094]: W0710 00:36:05.365151 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:05.365291 kubelet[2094]: E0710 00:36:05.365174 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:36:05.365413 kubelet[2094]: E0710 00:36:05.365391 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:05.365413 kubelet[2094]: W0710 00:36:05.365412 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:05.365495 kubelet[2094]: E0710 00:36:05.365425 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:05.365710 kubelet[2094]: E0710 00:36:05.365688 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:05.365710 kubelet[2094]: W0710 00:36:05.365707 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:05.365789 kubelet[2094]: E0710 00:36:05.365725 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:05.365934 kubelet[2094]: E0710 00:36:05.365920 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:05.365934 kubelet[2094]: W0710 00:36:05.365932 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:05.365988 kubelet[2094]: E0710 00:36:05.365947 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:05.366139 kubelet[2094]: E0710 00:36:05.366115 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:05.366139 kubelet[2094]: W0710 00:36:05.366129 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:05.366139 kubelet[2094]: E0710 00:36:05.366140 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:05.366637 kubelet[2094]: E0710 00:36:05.366606 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:05.366637 kubelet[2094]: W0710 00:36:05.366623 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:05.366637 kubelet[2094]: E0710 00:36:05.366640 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:36:05.366997 kubelet[2094]: E0710 00:36:05.366979 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:05.366997 kubelet[2094]: W0710 00:36:05.366995 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:05.367167 kubelet[2094]: E0710 00:36:05.367058 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:05.367211 kubelet[2094]: E0710 00:36:05.367194 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:05.367211 kubelet[2094]: W0710 00:36:05.367205 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:05.367303 kubelet[2094]: E0710 00:36:05.367286 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:05.367410 kubelet[2094]: E0710 00:36:05.367391 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:05.367455 kubelet[2094]: W0710 00:36:05.367413 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:05.367455 kubelet[2094]: E0710 00:36:05.367425 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:05.367612 kubelet[2094]: E0710 00:36:05.367597 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:05.367612 kubelet[2094]: W0710 00:36:05.367610 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:05.367694 kubelet[2094]: E0710 00:36:05.367619 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:05.367798 kubelet[2094]: E0710 00:36:05.367785 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:05.367798 kubelet[2094]: W0710 00:36:05.367797 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:05.367861 kubelet[2094]: E0710 00:36:05.367806 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:36:05.367990 kubelet[2094]: E0710 00:36:05.367977 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:05.368016 kubelet[2094]: W0710 00:36:05.367990 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:05.368016 kubelet[2094]: E0710 00:36:05.367999 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:05.368439 kubelet[2094]: E0710 00:36:05.368417 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:05.368481 kubelet[2094]: W0710 00:36:05.368440 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:05.368481 kubelet[2094]: E0710 00:36:05.368468 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:05.368709 kubelet[2094]: E0710 00:36:05.368689 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:05.368741 kubelet[2094]: W0710 00:36:05.368709 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:05.368741 kubelet[2094]: E0710 00:36:05.368720 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:05.368947 kubelet[2094]: E0710 00:36:05.368933 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:05.368978 kubelet[2094]: W0710 00:36:05.368947 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:05.368978 kubelet[2094]: E0710 00:36:05.368958 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:05.369169 kubelet[2094]: E0710 00:36:05.369154 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:05.369169 kubelet[2094]: W0710 00:36:05.369167 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:05.369231 kubelet[2094]: E0710 00:36:05.369176 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:36:05.369824 kubelet[2094]: E0710 00:36:05.369481 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:05.369824 kubelet[2094]: W0710 00:36:05.369497 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:05.369824 kubelet[2094]: E0710 00:36:05.369508 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:05.370488 kubelet[2094]: E0710 00:36:05.369966 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:05.370488 kubelet[2094]: W0710 00:36:05.369980 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:05.370488 kubelet[2094]: E0710 00:36:05.369993 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:06.203116 kubelet[2094]: E0710 00:36:06.202769 2094 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fxvwp" podUID="d132b5af-8e1a-4884-a0af-6e4f358a849a" Jul 10 00:36:06.254950 kubelet[2094]: I0710 00:36:06.254906 2094 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:36:06.255350 kubelet[2094]: E0710 00:36:06.255244 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:06.278713 kubelet[2094]: E0710 00:36:06.278674 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:06.278713 kubelet[2094]: W0710 00:36:06.278697 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:06.278713 kubelet[2094]: E0710 00:36:06.278716 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:06.278891 kubelet[2094]: E0710 00:36:06.278861 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:06.278891 kubelet[2094]: W0710 00:36:06.278870 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:06.278891 kubelet[2094]: E0710 00:36:06.278878 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:36:06.279039 kubelet[2094]: E0710 00:36:06.279013 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:06.279039 kubelet[2094]: W0710 00:36:06.279024 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:06.279039 kubelet[2094]: E0710 00:36:06.279033 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:06.279187 kubelet[2094]: E0710 00:36:06.279168 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:06.279187 kubelet[2094]: W0710 00:36:06.279179 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:06.279187 kubelet[2094]: E0710 00:36:06.279188 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:06.279340 kubelet[2094]: E0710 00:36:06.279318 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:06.279340 kubelet[2094]: W0710 00:36:06.279329 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:06.279340 kubelet[2094]: E0710 00:36:06.279337 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:06.279494 kubelet[2094]: E0710 00:36:06.279481 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:06.279494 kubelet[2094]: W0710 00:36:06.279492 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:06.279556 kubelet[2094]: E0710 00:36:06.279500 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:06.279655 kubelet[2094]: E0710 00:36:06.279631 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:06.279655 kubelet[2094]: W0710 00:36:06.279642 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:06.279655 kubelet[2094]: E0710 00:36:06.279651 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:36:06.279782 kubelet[2094]: E0710 00:36:06.279772 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:06.279809 kubelet[2094]: W0710 00:36:06.279781 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:06.279809 kubelet[2094]: E0710 00:36:06.279789 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:06.279926 kubelet[2094]: E0710 00:36:06.279916 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:06.279926 kubelet[2094]: W0710 00:36:06.279925 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:06.279977 kubelet[2094]: E0710 00:36:06.279935 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:06.280062 kubelet[2094]: E0710 00:36:06.280052 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:06.280086 kubelet[2094]: W0710 00:36:06.280061 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:06.280086 kubelet[2094]: E0710 00:36:06.280069 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:06.280200 kubelet[2094]: E0710 00:36:06.280190 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:06.280224 kubelet[2094]: W0710 00:36:06.280201 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:06.280224 kubelet[2094]: E0710 00:36:06.280209 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:06.280346 kubelet[2094]: E0710 00:36:06.280337 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:06.280370 kubelet[2094]: W0710 00:36:06.280346 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:06.280370 kubelet[2094]: E0710 00:36:06.280354 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:36:06.280505 kubelet[2094]: E0710 00:36:06.280494 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:06.280505 kubelet[2094]: W0710 00:36:06.280504 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:06.280561 kubelet[2094]: E0710 00:36:06.280512 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:06.280649 kubelet[2094]: E0710 00:36:06.280640 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:06.280676 kubelet[2094]: W0710 00:36:06.280649 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:06.280676 kubelet[2094]: E0710 00:36:06.280656 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:06.280784 kubelet[2094]: E0710 00:36:06.280774 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:06.280812 kubelet[2094]: W0710 00:36:06.280783 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:06.280812 kubelet[2094]: E0710 00:36:06.280791 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:36:06.289230 env[1315]: time="2025-07-10T00:36:06.289195106Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:06.290676 env[1315]: time="2025-07-10T00:36:06.290646928Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:06.292092 env[1315]: time="2025-07-10T00:36:06.292064591Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:06.293234 env[1315]: time="2025-07-10T00:36:06.293199377Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:06.293880 env[1315]: time="2025-07-10T00:36:06.293854329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 10 00:36:06.296264 env[1315]: time="2025-07-10T00:36:06.296228299Z" level=info msg="CreateContainer within sandbox \"48ae79c9e47f567624931b859f95f3bd83e900826f80cd34b028716d50d633a7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 10 00:36:06.306335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2276690765.mount: Deactivated successfully. Jul 10 00:36:06.309248 env[1315]: time="2025-07-10T00:36:06.309190978Z" level=info msg="CreateContainer within sandbox \"48ae79c9e47f567624931b859f95f3bd83e900826f80cd34b028716d50d633a7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1b92d3067963a82131513367dfc30e4e6b755332db085a9f53d59595c51fddfd\"" Jul 10 00:36:06.311303 env[1315]: time="2025-07-10T00:36:06.311232473Z" level=info msg="StartContainer for \"1b92d3067963a82131513367dfc30e4e6b755332db085a9f53d59595c51fddfd\"" Jul 10 00:36:06.373341 kubelet[2094]: E0710 00:36:06.373306 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:06.373600 kubelet[2094]: W0710 00:36:06.373581 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:06.373687 kubelet[2094]: E0710 00:36:06.373672 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:36:06.373974 kubelet[2094]: E0710 00:36:06.373955 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:06.374071 kubelet[2094]: W0710 00:36:06.374056 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:06.374148 kubelet[2094]: E0710 00:36:06.374136 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:06.374501 kubelet[2094]: E0710 00:36:06.374487 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:06.374600 kubelet[2094]: W0710 00:36:06.374586 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:06.374690 kubelet[2094]: E0710 00:36:06.374679 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:06.375014 kubelet[2094]: E0710 00:36:06.375000 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:06.375125 kubelet[2094]: W0710 00:36:06.375110 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:06.375278 kubelet[2094]: E0710 00:36:06.375248 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:06.375470 kubelet[2094]: E0710 00:36:06.375458 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:06.375601 kubelet[2094]: W0710 00:36:06.375581 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:06.375771 kubelet[2094]: E0710 00:36:06.375752 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:06.376007 kubelet[2094]: E0710 00:36:06.375996 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:06.376082 kubelet[2094]: W0710 00:36:06.376071 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:06.376214 kubelet[2094]: E0710 00:36:06.376193 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:36:06.376482 kubelet[2094]: E0710 00:36:06.376470 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:06.376629 kubelet[2094]: W0710 00:36:06.376596 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:06.376739 kubelet[2094]: E0710 00:36:06.376725 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:06.376982 kubelet[2094]: E0710 00:36:06.376970 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:06.377059 kubelet[2094]: W0710 00:36:06.377046 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:06.377163 kubelet[2094]: E0710 00:36:06.377143 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:06.377424 kubelet[2094]: E0710 00:36:06.377398 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:06.377525 kubelet[2094]: W0710 00:36:06.377510 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:06.377630 kubelet[2094]: E0710 00:36:06.377613 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:06.377804 kubelet[2094]: E0710 00:36:06.377792 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:06.377876 kubelet[2094]: W0710 00:36:06.377863 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:06.377998 kubelet[2094]: E0710 00:36:06.377979 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:06.378250 kubelet[2094]: E0710 00:36:06.378236 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:06.378322 kubelet[2094]: W0710 00:36:06.378309 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:06.378407 kubelet[2094]: E0710 00:36:06.378387 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:36:06.379237 kubelet[2094]: E0710 00:36:06.379221 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:06.379329 kubelet[2094]: W0710 00:36:06.379316 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:06.379445 kubelet[2094]: E0710 00:36:06.379415 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:06.379664 kubelet[2094]: E0710 00:36:06.379650 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:06.379740 kubelet[2094]: W0710 00:36:06.379727 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:06.379882 kubelet[2094]: E0710 00:36:06.379864 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:06.380020 kubelet[2094]: E0710 00:36:06.380008 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:06.380086 kubelet[2094]: W0710 00:36:06.380073 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:06.380148 kubelet[2094]: E0710 00:36:06.380137 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:06.380410 kubelet[2094]: E0710 00:36:06.380396 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:06.380551 kubelet[2094]: W0710 00:36:06.380537 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:06.380624 kubelet[2094]: E0710 00:36:06.380614 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:06.381233 kubelet[2094]: E0710 00:36:06.381219 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:06.381345 kubelet[2094]: W0710 00:36:06.381332 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:06.381445 kubelet[2094]: E0710 00:36:06.381422 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:36:06.381650 kubelet[2094]: E0710 00:36:06.381633 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:06.381696 kubelet[2094]: W0710 00:36:06.381650 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:06.381696 kubelet[2094]: E0710 00:36:06.381663 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:06.381978 kubelet[2094]: E0710 00:36:06.381966 2094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:36:06.381978 kubelet[2094]: W0710 00:36:06.381977 2094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:36:06.382048 kubelet[2094]: E0710 00:36:06.381987 2094 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:36:06.391867 env[1315]: time="2025-07-10T00:36:06.391809112Z" level=info msg="StartContainer for \"1b92d3067963a82131513367dfc30e4e6b755332db085a9f53d59595c51fddfd\" returns successfully" Jul 10 00:36:06.459673 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b92d3067963a82131513367dfc30e4e6b755332db085a9f53d59595c51fddfd-rootfs.mount: Deactivated successfully. 
Jul 10 00:36:06.472493 env[1315]: time="2025-07-10T00:36:06.472409431Z" level=info msg="shim disconnected" id=1b92d3067963a82131513367dfc30e4e6b755332db085a9f53d59595c51fddfd Jul 10 00:36:06.472493 env[1315]: time="2025-07-10T00:36:06.472488870Z" level=warning msg="cleaning up after shim disconnected" id=1b92d3067963a82131513367dfc30e4e6b755332db085a9f53d59595c51fddfd namespace=k8s.io Jul 10 00:36:06.472493 env[1315]: time="2025-07-10T00:36:06.472498670Z" level=info msg="cleaning up dead shim" Jul 10 00:36:06.480829 env[1315]: time="2025-07-10T00:36:06.480777447Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:36:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2840 runtime=io.containerd.runc.v2\n" Jul 10 00:36:07.259966 env[1315]: time="2025-07-10T00:36:07.259912911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 10 00:36:07.277674 kubelet[2094]: I0710 00:36:07.277599 2094 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-54d46dd7d7-gbr6s" podStartSLOduration=3.739140248 podStartE2EDuration="6.277581181s" podCreationTimestamp="2025-07-10 00:36:01 +0000 UTC" firstStartedPulling="2025-07-10 00:36:02.26038437 +0000 UTC m=+20.156963431" lastFinishedPulling="2025-07-10 00:36:04.798825303 +0000 UTC m=+22.695404364" observedRunningTime="2025-07-10 00:36:05.273539215 +0000 UTC m=+23.170118276" watchObservedRunningTime="2025-07-10 00:36:07.277581181 +0000 UTC m=+25.174160202" Jul 10 00:36:08.205685 kubelet[2094]: E0710 00:36:08.205635 2094 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fxvwp" podUID="d132b5af-8e1a-4884-a0af-6e4f358a849a" Jul 10 00:36:08.742916 kubelet[2094]: I0710 00:36:08.742856 2094 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:36:08.743294 kubelet[2094]: E0710 00:36:08.743233 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:08.789487 kernel: kauditd_printk_skb: 8 callbacks suppressed Jul 10 00:36:08.789625 kernel: audit: type=1325 audit(1752107768.783:289): table=filter:99 family=2 entries=21 op=nft_register_rule pid=2863 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:08.789652 kernel: audit: type=1300 audit(1752107768.783:289): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd31e5070 a2=0 a3=1 items=0 ppid=2205 pid=2863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:08.783000 audit[2863]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=2863 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:08.783000 audit[2863]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd31e5070 a2=0 a3=1 items=0 ppid=2205 pid=2863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:08.792593 kernel: audit: type=1327 audit(1752107768.783:289): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:08.783000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:08.799000 audit[2863]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_register_chain pid=2863 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:08.799000 audit[2863]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffd31e5070 a2=0 a3=1 items=0 ppid=2205 pid=2863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:08.806833 kernel: audit: type=1325 audit(1752107768.799:290): table=nat:100 family=2 entries=19 op=nft_register_chain pid=2863 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:08.806905 kernel: audit: type=1300 audit(1752107768.799:290): arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffd31e5070 a2=0 a3=1 items=0 ppid=2205 pid=2863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:08.806980 kernel: audit: type=1327 audit(1752107768.799:290): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:08.799000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:09.261936 kubelet[2094]: E0710 00:36:09.261907 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:10.203324 kubelet[2094]: E0710 00:36:10.202554 2094 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fxvwp" podUID="d132b5af-8e1a-4884-a0af-6e4f358a849a" Jul 10 00:36:10.250155 env[1315]: time="2025-07-10T00:36:10.250106276Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:10.251873 env[1315]: time="2025-07-10T00:36:10.251838137Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:10.253953 env[1315]: time="2025-07-10T00:36:10.253900796Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:10.256469 env[1315]: time="2025-07-10T00:36:10.256424049Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:10.257193 env[1315]: time="2025-07-10T00:36:10.257158882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns 
image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 10 00:36:10.259693 env[1315]: time="2025-07-10T00:36:10.259651256Z" level=info msg="CreateContainer within sandbox \"48ae79c9e47f567624931b859f95f3bd83e900826f80cd34b028716d50d633a7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 10 00:36:10.273022 env[1315]: time="2025-07-10T00:36:10.272968996Z" level=info msg="CreateContainer within sandbox \"48ae79c9e47f567624931b859f95f3bd83e900826f80cd34b028716d50d633a7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"01d6f0b9cc7a3c72fdacd711843a22fd27a50c70a96aca24b81793816b28e1f5\"" Jul 10 00:36:10.275037 env[1315]: time="2025-07-10T00:36:10.274992695Z" level=info msg="StartContainer for \"01d6f0b9cc7a3c72fdacd711843a22fd27a50c70a96aca24b81793816b28e1f5\"" Jul 10 00:36:10.357379 env[1315]: time="2025-07-10T00:36:10.357325512Z" level=info msg="StartContainer for \"01d6f0b9cc7a3c72fdacd711843a22fd27a50c70a96aca24b81793816b28e1f5\" returns successfully" Jul 10 00:36:11.023403 env[1315]: time="2025-07-10T00:36:11.023347185Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:36:11.048539 env[1315]: time="2025-07-10T00:36:11.048429092Z" level=info msg="shim disconnected" id=01d6f0b9cc7a3c72fdacd711843a22fd27a50c70a96aca24b81793816b28e1f5 Jul 10 00:36:11.048539 env[1315]: time="2025-07-10T00:36:11.048535451Z" level=warning msg="cleaning up after shim disconnected" id=01d6f0b9cc7a3c72fdacd711843a22fd27a50c70a96aca24b81793816b28e1f5 namespace=k8s.io Jul 10 00:36:11.048863 env[1315]: time="2025-07-10T00:36:11.048551611Z" level=info msg="cleaning up dead shim" Jul 10 00:36:11.056185 env[1315]: time="2025-07-10T00:36:11.056142934Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:36:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2913 runtime=io.containerd.runc.v2\n" Jul 10 00:36:11.117070 kubelet[2094]: I0710 00:36:11.116881 2094 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 10 00:36:11.268796 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01d6f0b9cc7a3c72fdacd711843a22fd27a50c70a96aca24b81793816b28e1f5-rootfs.mount: Deactivated successfully. 
Jul 10 00:36:11.270377 env[1315]: time="2025-07-10T00:36:11.270152101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 10 00:36:11.322211 kubelet[2094]: I0710 00:36:11.322083 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a81984fa-095f-48f3-8dad-bb2f184b8979-whisker-backend-key-pair\") pod \"whisker-666c86c9b9-ksjrb\" (UID: \"a81984fa-095f-48f3-8dad-bb2f184b8979\") " pod="calico-system/whisker-666c86c9b9-ksjrb" Jul 10 00:36:11.322211 kubelet[2094]: I0710 00:36:11.322142 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wjm4\" (UniqueName: \"kubernetes.io/projected/2b825d2b-e0da-4a05-8c42-0b337b179ba3-kube-api-access-6wjm4\") pod \"coredns-7c65d6cfc9-zl9sp\" (UID: \"2b825d2b-e0da-4a05-8c42-0b337b179ba3\") " pod="kube-system/coredns-7c65d6cfc9-zl9sp" Jul 10 00:36:11.322211 kubelet[2094]: I0710 00:36:11.322163 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfqsx\" (UniqueName: \"kubernetes.io/projected/a753c3a5-5f12-42b7-a570-848c45ac60a4-kube-api-access-hfqsx\") pod \"calico-kube-controllers-6946f6c79d-88gpt\" (UID: \"a753c3a5-5f12-42b7-a570-848c45ac60a4\") " pod="calico-system/calico-kube-controllers-6946f6c79d-88gpt" Jul 10 00:36:11.322211 kubelet[2094]: I0710 00:36:11.322184 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b825d2b-e0da-4a05-8c42-0b337b179ba3-config-volume\") pod \"coredns-7c65d6cfc9-zl9sp\" (UID: \"2b825d2b-e0da-4a05-8c42-0b337b179ba3\") " pod="kube-system/coredns-7c65d6cfc9-zl9sp" Jul 10 00:36:11.322211 kubelet[2094]: I0710 00:36:11.322212 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbd8q\" (UniqueName: \"kubernetes.io/projected/96a10c6f-bf44-4fd2-abf3-72068d8168d1-kube-api-access-qbd8q\") pod \"calico-apiserver-866977c98d-2dcmd\" (UID: \"96a10c6f-bf44-4fd2-abf3-72068d8168d1\") " pod="calico-apiserver/calico-apiserver-866977c98d-2dcmd" Jul 10 00:36:11.322863 kubelet[2094]: I0710 00:36:11.322230 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd0d6de4-b13e-4994-9b29-6ae2a2c1a419-config\") pod \"goldmane-58fd7646b9-ltbnn\" (UID: \"fd0d6de4-b13e-4994-9b29-6ae2a2c1a419\") " pod="calico-system/goldmane-58fd7646b9-ltbnn" Jul 10 00:36:11.322863 kubelet[2094]: I0710 00:36:11.322246 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2c85e535-ff6b-4896-a1a8-ba8b4bb49ac5-calico-apiserver-certs\") pod \"calico-apiserver-866977c98d-bvxbn\" (UID: \"2c85e535-ff6b-4896-a1a8-ba8b4bb49ac5\") " pod="calico-apiserver/calico-apiserver-866977c98d-bvxbn" Jul 10 00:36:11.322863 kubelet[2094]: I0710 00:36:11.322260 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a81984fa-095f-48f3-8dad-bb2f184b8979-whisker-ca-bundle\") pod \"whisker-666c86c9b9-ksjrb\" (UID: \"a81984fa-095f-48f3-8dad-bb2f184b8979\") " pod="calico-system/whisker-666c86c9b9-ksjrb" Jul 10 00:36:11.322863 kubelet[2094]: I0710 00:36:11.322281 2094 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrmsj\" (UniqueName: \"kubernetes.io/projected/a81984fa-095f-48f3-8dad-bb2f184b8979-kube-api-access-xrmsj\") pod \"whisker-666c86c9b9-ksjrb\" (UID: \"a81984fa-095f-48f3-8dad-bb2f184b8979\") " pod="calico-system/whisker-666c86c9b9-ksjrb" Jul 10 00:36:11.322863 kubelet[2094]: I0710 00:36:11.322307 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-945v9\" (UniqueName: \"kubernetes.io/projected/fd0d6de4-b13e-4994-9b29-6ae2a2c1a419-kube-api-access-945v9\") pod \"goldmane-58fd7646b9-ltbnn\" (UID: \"fd0d6de4-b13e-4994-9b29-6ae2a2c1a419\") " pod="calico-system/goldmane-58fd7646b9-ltbnn" Jul 10 00:36:11.322996 kubelet[2094]: I0710 00:36:11.322328 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a753c3a5-5f12-42b7-a570-848c45ac60a4-tigera-ca-bundle\") pod \"calico-kube-controllers-6946f6c79d-88gpt\" (UID: \"a753c3a5-5f12-42b7-a570-848c45ac60a4\") " pod="calico-system/calico-kube-controllers-6946f6c79d-88gpt" Jul 10 00:36:11.322996 kubelet[2094]: I0710 00:36:11.322345 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/96a10c6f-bf44-4fd2-abf3-72068d8168d1-calico-apiserver-certs\") pod \"calico-apiserver-866977c98d-2dcmd\" (UID: \"96a10c6f-bf44-4fd2-abf3-72068d8168d1\") " pod="calico-apiserver/calico-apiserver-866977c98d-2dcmd" Jul 10 00:36:11.322996 kubelet[2094]: I0710 00:36:11.322384 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ca43f8d-e9b6-493a-b482-3d2dc7232c75-config-volume\") pod \"coredns-7c65d6cfc9-2plgf\" (UID: \"7ca43f8d-e9b6-493a-b482-3d2dc7232c75\") " pod="kube-system/coredns-7c65d6cfc9-2plgf" Jul 10 00:36:11.322996 kubelet[2094]: I0710 00:36:11.322400 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/fd0d6de4-b13e-4994-9b29-6ae2a2c1a419-goldmane-key-pair\") pod \"goldmane-58fd7646b9-ltbnn\" (UID: \"fd0d6de4-b13e-4994-9b29-6ae2a2c1a419\") " pod="calico-system/goldmane-58fd7646b9-ltbnn" Jul 10 00:36:11.322996 kubelet[2094]: I0710 00:36:11.322415 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7k27\" (UniqueName: \"kubernetes.io/projected/2c85e535-ff6b-4896-a1a8-ba8b4bb49ac5-kube-api-access-m7k27\") pod \"calico-apiserver-866977c98d-bvxbn\" (UID: \"2c85e535-ff6b-4896-a1a8-ba8b4bb49ac5\") " pod="calico-apiserver/calico-apiserver-866977c98d-bvxbn" Jul 10 00:36:11.323155 kubelet[2094]: I0710 00:36:11.322460 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl5bg\" (UniqueName: \"kubernetes.io/projected/7ca43f8d-e9b6-493a-b482-3d2dc7232c75-kube-api-access-zl5bg\") pod \"coredns-7c65d6cfc9-2plgf\" (UID: \"7ca43f8d-e9b6-493a-b482-3d2dc7232c75\") " pod="kube-system/coredns-7c65d6cfc9-2plgf" Jul 10 00:36:11.323155 kubelet[2094]: I0710 00:36:11.322480 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/fd0d6de4-b13e-4994-9b29-6ae2a2c1a419-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-ltbnn\" (UID: \"fd0d6de4-b13e-4994-9b29-6ae2a2c1a419\") " pod="calico-system/goldmane-58fd7646b9-ltbnn" Jul 10 00:36:11.454742 env[1315]: time="2025-07-10T00:36:11.452887302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-666c86c9b9-ksjrb,Uid:a81984fa-095f-48f3-8dad-bb2f184b8979,Namespace:calico-system,Attempt:0,}" Jul 10 00:36:11.454742 env[1315]: time="2025-07-10T00:36:11.454076650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866977c98d-2dcmd,Uid:96a10c6f-bf44-4fd2-abf3-72068d8168d1,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:36:11.463756 env[1315]: time="2025-07-10T00:36:11.463715633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-ltbnn,Uid:fd0d6de4-b13e-4994-9b29-6ae2a2c1a419,Namespace:calico-system,Attempt:0,}" Jul 10 00:36:11.468445 kubelet[2094]: E0710 00:36:11.468395 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:11.472976 env[1315]: time="2025-07-10T00:36:11.469933611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zl9sp,Uid:2b825d2b-e0da-4a05-8c42-0b337b179ba3,Namespace:kube-system,Attempt:0,}" Jul 10 00:36:11.472976 env[1315]: time="2025-07-10T00:36:11.470572244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2plgf,Uid:7ca43f8d-e9b6-493a-b482-3d2dc7232c75,Namespace:kube-system,Attempt:0,}" Jul 10 00:36:11.472976 env[1315]: time="2025-07-10T00:36:11.470814842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866977c98d-bvxbn,Uid:2c85e535-ff6b-4896-a1a8-ba8b4bb49ac5,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:36:11.473148 kubelet[2094]: E0710 00:36:11.470077 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:11.748447 env[1315]: time="2025-07-10T00:36:11.748381809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6946f6c79d-88gpt,Uid:a753c3a5-5f12-42b7-a570-848c45ac60a4,Namespace:calico-system,Attempt:0,}" Jul 10 00:36:11.801498 env[1315]: time="2025-07-10T00:36:11.801416715Z" level=error msg="Failed to destroy network for sandbox \"d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:11.802004 env[1315]: time="2025-07-10T00:36:11.801967709Z" level=error msg="encountered an error cleaning up failed sandbox \"d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:11.802073 env[1315]: time="2025-07-10T00:36:11.802023829Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-666c86c9b9-ksjrb,Uid:a81984fa-095f-48f3-8dad-bb2f184b8979,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:11.803256 kubelet[2094]: E0710 00:36:11.803187 2094 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:11.803365 kubelet[2094]: E0710 00:36:11.803298 2094 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-666c86c9b9-ksjrb" Jul 10 00:36:11.803365 kubelet[2094]: E0710 00:36:11.803322 2094 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-666c86c9b9-ksjrb" Jul 10 00:36:11.803429 kubelet[2094]: E0710 00:36:11.803368 2094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-666c86c9b9-ksjrb_calico-system(a81984fa-095f-48f3-8dad-bb2f184b8979)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-666c86c9b9-ksjrb_calico-system(a81984fa-095f-48f3-8dad-bb2f184b8979)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-666c86c9b9-ksjrb" podUID="a81984fa-095f-48f3-8dad-bb2f184b8979" Jul 10 00:36:11.813677 env[1315]: time="2025-07-10T00:36:11.813619872Z" level=error msg="Failed to destroy network for sandbox \"182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:11.814049 env[1315]: time="2025-07-10T00:36:11.814016268Z" level=error msg="encountered an error cleaning up failed sandbox \"182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:11.814096 env[1315]: time="2025-07-10T00:36:11.814073188Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866977c98d-2dcmd,Uid:96a10c6f-bf44-4fd2-abf3-72068d8168d1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:11.814324 kubelet[2094]: E0710 00:36:11.814287 2094 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:11.814379 kubelet[2094]: E0710 00:36:11.814349 2094 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-866977c98d-2dcmd" Jul 10 00:36:11.814379 kubelet[2094]: E0710 00:36:11.814370 2094 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-866977c98d-2dcmd" Jul 10 00:36:11.814469 kubelet[2094]: E0710 00:36:11.814412 2094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-866977c98d-2dcmd_calico-apiserver(96a10c6f-bf44-4fd2-abf3-72068d8168d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-866977c98d-2dcmd_calico-apiserver(96a10c6f-bf44-4fd2-abf3-72068d8168d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-866977c98d-2dcmd" podUID="96a10c6f-bf44-4fd2-abf3-72068d8168d1" Jul 10 00:36:11.835100 env[1315]: time="2025-07-10T00:36:11.835030617Z" level=error msg="Failed to destroy network for sandbox \"386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:11.835467 env[1315]: time="2025-07-10T00:36:11.835421973Z" level=error msg="encountered an error cleaning up failed sandbox \"386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:11.835530 env[1315]: time="2025-07-10T00:36:11.835486572Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-zl9sp,Uid:2b825d2b-e0da-4a05-8c42-0b337b179ba3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:11.835761 kubelet[2094]: E0710 00:36:11.835714 2094 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:11.835832 kubelet[2094]: E0710 00:36:11.835784 2094 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-zl9sp" Jul 10 00:36:11.835832 kubelet[2094]: E0710 00:36:11.835808 2094 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-zl9sp" Jul 10 00:36:11.835898 kubelet[2094]: E0710 00:36:11.835846 2094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-zl9sp_kube-system(2b825d2b-e0da-4a05-8c42-0b337b179ba3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-zl9sp_kube-system(2b825d2b-e0da-4a05-8c42-0b337b179ba3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-zl9sp" podUID="2b825d2b-e0da-4a05-8c42-0b337b179ba3" Jul 10 00:36:11.839658 env[1315]: time="2025-07-10T00:36:11.839600531Z" level=error msg="Failed to destroy network for sandbox \"c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:11.840004 env[1315]: time="2025-07-10T00:36:11.839960767Z" level=error msg="encountered an error cleaning up failed sandbox \"c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:11.840076 env[1315]: time="2025-07-10T00:36:11.840009847Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2plgf,Uid:7ca43f8d-e9b6-493a-b482-3d2dc7232c75,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:11.840218 kubelet[2094]: E0710 00:36:11.840178 2094 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:11.840278 kubelet[2094]: E0710 00:36:11.840234 2094 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-2plgf" Jul 10 00:36:11.840278 kubelet[2094]: E0710 00:36:11.840256 2094 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-2plgf" Jul 10 00:36:11.840346 kubelet[2094]: E0710 00:36:11.840301 2094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-2plgf_kube-system(7ca43f8d-e9b6-493a-b482-3d2dc7232c75)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-2plgf_kube-system(7ca43f8d-e9b6-493a-b482-3d2dc7232c75)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-2plgf" podUID="7ca43f8d-e9b6-493a-b482-3d2dc7232c75" Jul 10 00:36:11.850024 env[1315]: time="2025-07-10T00:36:11.849973066Z" level=error msg="Failed to destroy network for sandbox \"4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:11.850578 env[1315]: time="2025-07-10T00:36:11.850537701Z" level=error msg="encountered an error cleaning up failed sandbox \"4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:11.850713 env[1315]: 
time="2025-07-10T00:36:11.850684059Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-ltbnn,Uid:fd0d6de4-b13e-4994-9b29-6ae2a2c1a419,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:11.851006 kubelet[2094]: E0710 00:36:11.850972 2094 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:11.851076 kubelet[2094]: E0710 00:36:11.851032 2094 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-ltbnn" Jul 10 00:36:11.851076 kubelet[2094]: E0710 00:36:11.851051 2094 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-ltbnn" Jul 10 00:36:11.851145 kubelet[2094]: E0710 00:36:11.851085 2094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-ltbnn_calico-system(fd0d6de4-b13e-4994-9b29-6ae2a2c1a419)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-ltbnn_calico-system(fd0d6de4-b13e-4994-9b29-6ae2a2c1a419)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-ltbnn" podUID="fd0d6de4-b13e-4994-9b29-6ae2a2c1a419" Jul 10 00:36:11.851484 env[1315]: time="2025-07-10T00:36:11.851429972Z" level=error msg="Failed to destroy network for sandbox \"9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:11.851811 env[1315]: time="2025-07-10T00:36:11.851759448Z" level=error msg="encountered an error cleaning up failed sandbox \"9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 10 00:36:11.851858 env[1315]: time="2025-07-10T00:36:11.851806488Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6946f6c79d-88gpt,Uid:a753c3a5-5f12-42b7-a570-848c45ac60a4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:11.852178 kubelet[2094]: E0710 00:36:11.851945 2094 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:11.852220 kubelet[2094]: E0710 00:36:11.852196 2094 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6946f6c79d-88gpt" Jul 10 00:36:11.852220 kubelet[2094]: E0710 00:36:11.852211 2094 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6946f6c79d-88gpt" Jul 10 00:36:11.852276 kubelet[2094]: E0710 00:36:11.852245 2094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6946f6c79d-88gpt_calico-system(a753c3a5-5f12-42b7-a570-848c45ac60a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6946f6c79d-88gpt_calico-system(a753c3a5-5f12-42b7-a570-848c45ac60a4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6946f6c79d-88gpt" podUID="a753c3a5-5f12-42b7-a570-848c45ac60a4" Jul 10 00:36:11.862230 env[1315]: time="2025-07-10T00:36:11.862162784Z" level=error msg="Failed to destroy network for sandbox \"99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:11.862563 env[1315]: time="2025-07-10T00:36:11.862524740Z" level=error msg="encountered an error cleaning up failed sandbox \"99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:11.862605 env[1315]: time="2025-07-10T00:36:11.862581219Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866977c98d-bvxbn,Uid:2c85e535-ff6b-4896-a1a8-ba8b4bb49ac5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:11.862784 kubelet[2094]: E0710 00:36:11.862739 2094 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:11.862838 kubelet[2094]: E0710 00:36:11.862790 2094 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-866977c98d-bvxbn" Jul 10 00:36:11.862838 kubelet[2094]: E0710 00:36:11.862812 2094 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-866977c98d-bvxbn" Jul 10 00:36:11.862890 kubelet[2094]: E0710 00:36:11.862845 2094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-866977c98d-bvxbn_calico-apiserver(2c85e535-ff6b-4896-a1a8-ba8b4bb49ac5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-866977c98d-bvxbn_calico-apiserver(2c85e535-ff6b-4896-a1a8-ba8b4bb49ac5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-866977c98d-bvxbn" podUID="2c85e535-ff6b-4896-a1a8-ba8b4bb49ac5" Jul 10 00:36:12.207453 env[1315]: time="2025-07-10T00:36:12.207400989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxvwp,Uid:d132b5af-8e1a-4884-a0af-6e4f358a849a,Namespace:calico-system,Attempt:0,}" Jul 10 00:36:12.259350 env[1315]: time="2025-07-10T00:36:12.259289087Z" level=error msg="Failed to destroy network for sandbox \"1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jul 10 00:36:12.259851 env[1315]: time="2025-07-10T00:36:12.259818602Z" level=error msg="encountered an error cleaning up failed sandbox \"1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:12.259969 env[1315]: time="2025-07-10T00:36:12.259940961Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxvwp,Uid:d132b5af-8e1a-4884-a0af-6e4f358a849a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:12.260239 kubelet[2094]: E0710 00:36:12.260202 2094 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:12.260318 kubelet[2094]: E0710 00:36:12.260262 2094 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fxvwp" Jul 10 00:36:12.260318 kubelet[2094]: E0710 00:36:12.260290 2094 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fxvwp" Jul 10 00:36:12.260387 kubelet[2094]: E0710 00:36:12.260329 2094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fxvwp_calico-system(d132b5af-8e1a-4884-a0af-6e4f358a849a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fxvwp_calico-system(d132b5af-8e1a-4884-a0af-6e4f358a849a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fxvwp" podUID="d132b5af-8e1a-4884-a0af-6e4f358a849a" Jul 10 00:36:12.269399 kubelet[2094]: I0710 00:36:12.269372 2094 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" Jul 10 00:36:12.270221 env[1315]: time="2025-07-10T00:36:12.270173542Z" level=info 
msg="StopPodSandbox for \"d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6\"" Jul 10 00:36:12.276581 kubelet[2094]: I0710 00:36:12.276551 2094 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" Jul 10 00:36:12.277256 env[1315]: time="2025-07-10T00:36:12.277231874Z" level=info msg="StopPodSandbox for \"182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0\"" Jul 10 00:36:12.278540 kubelet[2094]: I0710 00:36:12.278516 2094 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" Jul 10 00:36:12.279189 env[1315]: time="2025-07-10T00:36:12.279154175Z" level=info msg="StopPodSandbox for \"1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198\"" Jul 10 00:36:12.279763 kubelet[2094]: I0710 00:36:12.279646 2094 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" Jul 10 00:36:12.280421 env[1315]: time="2025-07-10T00:36:12.280225925Z" level=info msg="StopPodSandbox for \"9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464\"" Jul 10 00:36:12.281991 kubelet[2094]: I0710 00:36:12.281955 2094 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" Jul 10 00:36:12.282617 env[1315]: time="2025-07-10T00:36:12.282575262Z" level=info msg="StopPodSandbox for \"4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9\"" Jul 10 00:36:12.283221 kubelet[2094]: I0710 00:36:12.283194 2094 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" Jul 10 00:36:12.283681 env[1315]: time="2025-07-10T00:36:12.283633852Z" level=info msg="StopPodSandbox for \"99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86\"" Jul 10 00:36:12.289129 kubelet[2094]: I0710 00:36:12.289102 2094 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" Jul 10 00:36:12.289765 env[1315]: time="2025-07-10T00:36:12.289720753Z" level=info msg="StopPodSandbox for \"c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77\"" Jul 10 00:36:12.292932 kubelet[2094]: I0710 00:36:12.292902 2094 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" Jul 10 00:36:12.293508 env[1315]: time="2025-07-10T00:36:12.293424597Z" level=info msg="StopPodSandbox for \"386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2\"" Jul 10 00:36:12.315073 env[1315]: time="2025-07-10T00:36:12.315020268Z" level=error msg="StopPodSandbox for \"d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6\" failed" error="failed to destroy network for sandbox \"d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:12.315419 kubelet[2094]: E0710 00:36:12.315382 2094 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" Jul 10 00:36:12.315508 kubelet[2094]: E0710 00:36:12.315466 2094 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6"} Jul 10 00:36:12.315537 kubelet[2094]: E0710 00:36:12.315523 2094 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a81984fa-095f-48f3-8dad-bb2f184b8979\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:36:12.315589 kubelet[2094]: E0710 00:36:12.315545 2094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a81984fa-095f-48f3-8dad-bb2f184b8979\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-666c86c9b9-ksjrb" podUID="a81984fa-095f-48f3-8dad-bb2f184b8979" Jul 10 00:36:12.324480 env[1315]: time="2025-07-10T00:36:12.324389057Z" level=error msg="StopPodSandbox for \"182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0\" failed" error="failed to destroy network for sandbox \"182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:12.325237 kubelet[2094]: E0710 00:36:12.325153 2094 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" Jul 10 00:36:12.325553 kubelet[2094]: E0710 00:36:12.325245 2094 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0"} Jul 10 00:36:12.325553 kubelet[2094]: E0710 00:36:12.325286 2094 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"96a10c6f-bf44-4fd2-abf3-72068d8168d1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Jul 10 00:36:12.325553 kubelet[2094]: E0710 00:36:12.325309 2094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"96a10c6f-bf44-4fd2-abf3-72068d8168d1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-866977c98d-2dcmd" podUID="96a10c6f-bf44-4fd2-abf3-72068d8168d1" Jul 10 00:36:12.335927 env[1315]: time="2025-07-10T00:36:12.335878866Z" level=error msg="StopPodSandbox for \"9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464\" failed" error="failed to destroy network for sandbox \"9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:12.336116 kubelet[2094]: E0710 00:36:12.336080 2094 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" Jul 10 00:36:12.336190 kubelet[2094]: E0710 00:36:12.336128 2094 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464"} Jul 10 00:36:12.336190 kubelet[2094]: E0710 00:36:12.336163 2094 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a753c3a5-5f12-42b7-a570-848c45ac60a4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:36:12.336286 kubelet[2094]: E0710 00:36:12.336183 2094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a753c3a5-5f12-42b7-a570-848c45ac60a4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6946f6c79d-88gpt" podUID="a753c3a5-5f12-42b7-a570-848c45ac60a4" Jul 10 00:36:12.350752 env[1315]: time="2025-07-10T00:36:12.350691483Z" level=error msg="StopPodSandbox for \"1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198\" failed" error="failed to destroy network for sandbox \"1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:12.351134 kubelet[2094]: E0710 00:36:12.351095 2094 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" Jul 10 00:36:12.351205 kubelet[2094]: E0710 00:36:12.351148 2094 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198"} Jul 10 00:36:12.351246 kubelet[2094]: E0710 00:36:12.351234 2094 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d132b5af-8e1a-4884-a0af-6e4f358a849a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:36:12.351355 kubelet[2094]: E0710 00:36:12.351259 2094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d132b5af-8e1a-4884-a0af-6e4f358a849a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fxvwp" podUID="d132b5af-8e1a-4884-a0af-6e4f358a849a" Jul 10 00:36:12.354137 env[1315]: time="2025-07-10T00:36:12.354093770Z" level=error msg="StopPodSandbox for \"4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9\" failed" error="failed to destroy network for sandbox \"4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:12.354319 kubelet[2094]: E0710 00:36:12.354288 2094 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" Jul 10 00:36:12.354366 kubelet[2094]: E0710 00:36:12.354328 2094 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9"} Jul 10 00:36:12.354418 kubelet[2094]: E0710 00:36:12.354355 2094 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fd0d6de4-b13e-4994-9b29-6ae2a2c1a419\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:36:12.354418 kubelet[2094]: E0710 00:36:12.354387 2094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fd0d6de4-b13e-4994-9b29-6ae2a2c1a419\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-ltbnn" podUID="fd0d6de4-b13e-4994-9b29-6ae2a2c1a419" Jul 10 00:36:12.355560 env[1315]: time="2025-07-10T00:36:12.355510716Z" level=error msg="StopPodSandbox for \"386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2\" failed" error="failed to destroy network for sandbox \"386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:12.355699 kubelet[2094]: E0710 00:36:12.355670 2094 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" Jul 10 00:36:12.355750 kubelet[2094]: E0710 00:36:12.355703 2094 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2"} Jul 10 00:36:12.355750 kubelet[2094]: E0710 00:36:12.355729 2094 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2b825d2b-e0da-4a05-8c42-0b337b179ba3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:36:12.355835 kubelet[2094]: E0710 00:36:12.355749 2094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2b825d2b-e0da-4a05-8c42-0b337b179ba3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-zl9sp" podUID="2b825d2b-e0da-4a05-8c42-0b337b179ba3" Jul 10 00:36:12.362329 env[1315]: time="2025-07-10T00:36:12.362283851Z" level=error msg="StopPodSandbox for \"c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77\" failed" error="failed to 
destroy network for sandbox \"c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:12.362504 kubelet[2094]: E0710 00:36:12.362468 2094 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" Jul 10 00:36:12.362558 kubelet[2094]: E0710 00:36:12.362510 2094 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77"} Jul 10 00:36:12.362558 kubelet[2094]: E0710 00:36:12.362535 2094 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ca43f8d-e9b6-493a-b482-3d2dc7232c75\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:36:12.362643 kubelet[2094]: E0710 00:36:12.362554 2094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ca43f8d-e9b6-493a-b482-3d2dc7232c75\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-2plgf" podUID="7ca43f8d-e9b6-493a-b482-3d2dc7232c75" Jul 10 00:36:12.364892 env[1315]: time="2025-07-10T00:36:12.364850306Z" level=error msg="StopPodSandbox for \"99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86\" failed" error="failed to destroy network for sandbox \"99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:36:12.365041 kubelet[2094]: E0710 00:36:12.364996 2094 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" Jul 10 00:36:12.365041 kubelet[2094]: E0710 00:36:12.365037 2094 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86"} Jul 10 00:36:12.365128 kubelet[2094]: E0710 00:36:12.365061 2094 
kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2c85e535-ff6b-4896-a1a8-ba8b4bb49ac5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:36:12.365128 kubelet[2094]: E0710 00:36:12.365080 2094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2c85e535-ff6b-4896-a1a8-ba8b4bb49ac5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-866977c98d-bvxbn" podUID="2c85e535-ff6b-4896-a1a8-ba8b4bb49ac5" Jul 10 00:36:16.770359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3290769813.mount: Deactivated successfully. Jul 10 00:36:16.997145 env[1315]: time="2025-07-10T00:36:16.997096160Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:17.001145 env[1315]: time="2025-07-10T00:36:17.001098767Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:17.007117 env[1315]: time="2025-07-10T00:36:17.007058319Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:17.008723 env[1315]: time="2025-07-10T00:36:17.008683666Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:17.009316 env[1315]: time="2025-07-10T00:36:17.009287981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 10 00:36:17.022734 env[1315]: time="2025-07-10T00:36:17.022635753Z" level=info msg="CreateContainer within sandbox \"48ae79c9e47f567624931b859f95f3bd83e900826f80cd34b028716d50d633a7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 10 00:36:17.040050 env[1315]: time="2025-07-10T00:36:17.039955293Z" level=info msg="CreateContainer within sandbox \"48ae79c9e47f567624931b859f95f3bd83e900826f80cd34b028716d50d633a7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e46f071d032893f99d27d752fb23cf9f2cff7644979c2115796411dfd015505d\"" Jul 10 00:36:17.040748 env[1315]: time="2025-07-10T00:36:17.040557608Z" level=info msg="StartContainer for \"e46f071d032893f99d27d752fb23cf9f2cff7644979c2115796411dfd015505d\"" Jul 10 00:36:17.175160 env[1315]: time="2025-07-10T00:36:17.175109962Z" level=info msg="StartContainer for \"e46f071d032893f99d27d752fb23cf9f2cff7644979c2115796411dfd015505d\" returns successfully" Jul 
10 00:36:17.319135 kubelet[2094]: I0710 00:36:17.318993 2094 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-km5zf" podStartSLOduration=0.808202902 podStartE2EDuration="15.318978719s" podCreationTimestamp="2025-07-10 00:36:02 +0000 UTC" firstStartedPulling="2025-07-10 00:36:02.500114191 +0000 UTC m=+20.396693252" lastFinishedPulling="2025-07-10 00:36:17.010890008 +0000 UTC m=+34.907469069" observedRunningTime="2025-07-10 00:36:17.318631202 +0000 UTC m=+35.215210263" watchObservedRunningTime="2025-07-10 00:36:17.318978719 +0000 UTC m=+35.215557740" Jul 10 00:36:17.324332 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 10 00:36:17.324481 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 10 00:36:17.417179 env[1315]: time="2025-07-10T00:36:17.416954328Z" level=info msg="StopPodSandbox for \"d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6\"" Jul 10 00:36:17.685975 env[1315]: 2025-07-10 00:36:17.546 [INFO][3414] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" Jul 10 00:36:17.685975 env[1315]: 2025-07-10 00:36:17.547 [INFO][3414] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" iface="eth0" netns="/var/run/netns/cni-585f6a31-351f-3548-fcac-be2efdf5fcd2" Jul 10 00:36:17.685975 env[1315]: 2025-07-10 00:36:17.548 [INFO][3414] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" iface="eth0" netns="/var/run/netns/cni-585f6a31-351f-3548-fcac-be2efdf5fcd2" Jul 10 00:36:17.685975 env[1315]: 2025-07-10 00:36:17.549 [INFO][3414] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" iface="eth0" netns="/var/run/netns/cni-585f6a31-351f-3548-fcac-be2efdf5fcd2" Jul 10 00:36:17.685975 env[1315]: 2025-07-10 00:36:17.549 [INFO][3414] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" Jul 10 00:36:17.685975 env[1315]: 2025-07-10 00:36:17.549 [INFO][3414] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" Jul 10 00:36:17.685975 env[1315]: 2025-07-10 00:36:17.667 [INFO][3424] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" HandleID="k8s-pod-network.d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" Workload="localhost-k8s-whisker--666c86c9b9--ksjrb-eth0" Jul 10 00:36:17.685975 env[1315]: 2025-07-10 00:36:17.667 [INFO][3424] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:36:17.685975 env[1315]: 2025-07-10 00:36:17.667 [INFO][3424] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:36:17.685975 env[1315]: 2025-07-10 00:36:17.679 [WARNING][3424] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" HandleID="k8s-pod-network.d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" Workload="localhost-k8s-whisker--666c86c9b9--ksjrb-eth0" Jul 10 00:36:17.685975 env[1315]: 2025-07-10 00:36:17.679 [INFO][3424] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" HandleID="k8s-pod-network.d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" Workload="localhost-k8s-whisker--666c86c9b9--ksjrb-eth0" Jul 10 00:36:17.685975 env[1315]: 2025-07-10 00:36:17.681 [INFO][3424] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:36:17.685975 env[1315]: 2025-07-10 00:36:17.684 [INFO][3414] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" Jul 10 00:36:17.686400 env[1315]: time="2025-07-10T00:36:17.686036914Z" level=info msg="TearDown network for sandbox \"d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6\" successfully" Jul 10 00:36:17.686400 env[1315]: time="2025-07-10T00:36:17.686067034Z" level=info msg="StopPodSandbox for \"d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6\" returns successfully" Jul 10 00:36:17.770724 kubelet[2094]: I0710 00:36:17.770676 2094 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrmsj\" (UniqueName: \"kubernetes.io/projected/a81984fa-095f-48f3-8dad-bb2f184b8979-kube-api-access-xrmsj\") pod \"a81984fa-095f-48f3-8dad-bb2f184b8979\" (UID: \"a81984fa-095f-48f3-8dad-bb2f184b8979\") " Jul 10 00:36:17.770888 kubelet[2094]: I0710 00:36:17.770737 2094 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a81984fa-095f-48f3-8dad-bb2f184b8979-whisker-ca-bundle\") pod \"a81984fa-095f-48f3-8dad-bb2f184b8979\" (UID: \"a81984fa-095f-48f3-8dad-bb2f184b8979\") " Jul 10 00:36:17.770888 kubelet[2094]: I0710 00:36:17.770763 2094 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a81984fa-095f-48f3-8dad-bb2f184b8979-whisker-backend-key-pair\") pod \"a81984fa-095f-48f3-8dad-bb2f184b8979\" (UID: \"a81984fa-095f-48f3-8dad-bb2f184b8979\") " Jul 10 00:36:17.771474 systemd[1]: run-netns-cni\x2d585f6a31\x2d351f\x2d3548\x2dfcac\x2dbe2efdf5fcd2.mount: Deactivated successfully. Jul 10 00:36:17.774674 kubelet[2094]: I0710 00:36:17.774628 2094 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a81984fa-095f-48f3-8dad-bb2f184b8979-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a81984fa-095f-48f3-8dad-bb2f184b8979" (UID: "a81984fa-095f-48f3-8dad-bb2f184b8979"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 10 00:36:17.778016 systemd[1]: var-lib-kubelet-pods-a81984fa\x2d095f\x2d48f3\x2d8dad\x2dbb2f184b8979-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxrmsj.mount: Deactivated successfully. Jul 10 00:36:17.778159 systemd[1]: var-lib-kubelet-pods-a81984fa\x2d095f\x2d48f3\x2d8dad\x2dbb2f184b8979-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 10 00:36:17.779308 kubelet[2094]: I0710 00:36:17.779267 2094 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a81984fa-095f-48f3-8dad-bb2f184b8979-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a81984fa-095f-48f3-8dad-bb2f184b8979" (UID: "a81984fa-095f-48f3-8dad-bb2f184b8979"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 10 00:36:17.779384 kubelet[2094]: I0710 00:36:17.779364 2094 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a81984fa-095f-48f3-8dad-bb2f184b8979-kube-api-access-xrmsj" (OuterVolumeSpecName: "kube-api-access-xrmsj") pod "a81984fa-095f-48f3-8dad-bb2f184b8979" (UID: "a81984fa-095f-48f3-8dad-bb2f184b8979"). InnerVolumeSpecName "kube-api-access-xrmsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 00:36:17.871748 kubelet[2094]: I0710 00:36:17.871688 2094 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a81984fa-095f-48f3-8dad-bb2f184b8979-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:17.871748 kubelet[2094]: I0710 00:36:17.871724 2094 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrmsj\" (UniqueName: \"kubernetes.io/projected/a81984fa-095f-48f3-8dad-bb2f184b8979-kube-api-access-xrmsj\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:17.871748 kubelet[2094]: I0710 00:36:17.871744 2094 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a81984fa-095f-48f3-8dad-bb2f184b8979-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:18.306026 kubelet[2094]: I0710 00:36:18.305984 2094 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:36:18.474454 kubelet[2094]: I0710 00:36:18.474388 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnj44\" (UniqueName: \"kubernetes.io/projected/d93e4554-3abc-467c-b37b-cea4f4d6e9ff-kube-api-access-jnj44\") pod \"whisker-5688bcc687-8k7vk\" (UID: \"d93e4554-3abc-467c-b37b-cea4f4d6e9ff\") " pod="calico-system/whisker-5688bcc687-8k7vk" Jul 10 00:36:18.474895 kubelet[2094]: I0710 00:36:18.474874 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d93e4554-3abc-467c-b37b-cea4f4d6e9ff-whisker-ca-bundle\") pod \"whisker-5688bcc687-8k7vk\" (UID: \"d93e4554-3abc-467c-b37b-cea4f4d6e9ff\") " pod="calico-system/whisker-5688bcc687-8k7vk" Jul 10 00:36:18.475055 kubelet[2094]: I0710 00:36:18.475038 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d93e4554-3abc-467c-b37b-cea4f4d6e9ff-whisker-backend-key-pair\") pod \"whisker-5688bcc687-8k7vk\" (UID: \"d93e4554-3abc-467c-b37b-cea4f4d6e9ff\") " pod="calico-system/whisker-5688bcc687-8k7vk" Jul 10 00:36:18.661035 env[1315]: time="2025-07-10T00:36:18.660917253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5688bcc687-8k7vk,Uid:d93e4554-3abc-467c-b37b-cea4f4d6e9ff,Namespace:calico-system,Attempt:0,}" Jul 10 00:36:18.740000 audit[3508]: AVC avc: denied { write } for pid=3508 comm="tee" name="fd" dev="proc" ino=18255 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 00:36:18.740000 audit[3508]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc05987d2 a2=241 a3=1b6 items=1 ppid=3474 pid=3508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:18.748468 kernel: audit: type=1400 audit(1752107778.740:291): avc: denied { write } for pid=3508 comm="tee" name="fd" dev="proc" ino=18255 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 00:36:18.748626 kernel: audit: type=1300 audit(1752107778.740:291): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc05987d2 a2=241 a3=1b6 items=1 ppid=3474 pid=3508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:18.748667 kernel: audit: type=1307 audit(1752107778.740:291): cwd="/etc/service/enabled/node-status-reporter/log" Jul 10 00:36:18.740000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jul 10 00:36:18.740000 audit: PATH item=0 name="/dev/fd/63" inode=18248 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:36:18.752773 kernel: audit: type=1302 audit(1752107778.740:291): item=0 name="/dev/fd/63" inode=18248 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:36:18.740000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 10 00:36:18.755213 kernel: audit: type=1327 audit(1752107778.740:291): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 10 00:36:18.771457 kernel: audit: type=1400 audit(1752107778.751:292): avc: denied { write } for pid=3524 comm="tee" name="fd" dev="proc" ino=18271 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 00:36:18.751000 audit[3524]: AVC avc: denied { write } for pid=3524 comm="tee" name="fd" dev="proc" ino=18271 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 00:36:18.751000 audit[3524]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc0ffe7e1 a2=241 a3=1b6 items=1 ppid=3476 pid=3524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:18.784306 kernel: audit: type=1300 audit(1752107778.751:292): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc0ffe7e1 a2=241 a3=1b6 items=1 ppid=3476 pid=3524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:18.784398 kernel: audit: type=1307 audit(1752107778.751:292): cwd="/etc/service/enabled/felix/log" Jul 10 00:36:18.751000 audit: CWD cwd="/etc/service/enabled/felix/log" Jul 10 00:36:18.751000 
audit: PATH item=0 name="/dev/fd/63" inode=18265 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:36:18.787833 kernel: audit: type=1302 audit(1752107778.751:292): item=0 name="/dev/fd/63" inode=18265 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:36:18.751000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 10 00:36:18.794419 kernel: audit: type=1327 audit(1752107778.751:292): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 10 00:36:18.754000 audit[3518]: AVC avc: denied { write } for pid=3518 comm="tee" name="fd" dev="proc" ino=18275 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 00:36:18.754000 audit[3518]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd09c57e1 a2=241 a3=1b6 items=1 ppid=3480 pid=3518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:18.754000 audit: CWD cwd="/etc/service/enabled/confd/log" Jul 10 00:36:18.754000 audit: PATH item=0 name="/dev/fd/63" inode=19613 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:36:18.754000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 10 00:36:18.755000 audit[3531]: AVC avc: denied { write } for pid=3531 comm="tee" name="fd" dev="proc" ino=18279 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 00:36:18.755000 audit[3531]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff54237d1 a2=241 a3=1b6 items=1 ppid=3496 pid=3531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:18.755000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jul 10 00:36:18.755000 audit: PATH item=0 name="/dev/fd/63" inode=18268 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:36:18.755000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 10 00:36:18.764000 audit[3547]: AVC avc: denied { write } for pid=3547 comm="tee" name="fd" dev="proc" ino=18287 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 00:36:18.764000 audit[3547]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe11057e1 a2=241 a3=1b6 items=1 ppid=3482 pid=3547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 
key=(null) Jul 10 00:36:18.764000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jul 10 00:36:18.764000 audit: PATH item=0 name="/dev/fd/63" inode=19295 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:36:18.764000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 10 00:36:18.813000 audit[3556]: AVC avc: denied { write } for pid=3556 comm="tee" name="fd" dev="proc" ino=18295 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 00:36:18.813000 audit[3556]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc8e8d7e2 a2=241 a3=1b6 items=1 ppid=3471 pid=3556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:18.813000 audit: CWD cwd="/etc/service/enabled/bird/log" Jul 10 00:36:18.813000 audit: PATH item=0 name="/dev/fd/63" inode=19618 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:36:18.813000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 10 00:36:18.816000 audit[3553]: AVC avc: denied { write } for pid=3553 comm="tee" name="fd" dev="proc" ino=19621 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 00:36:18.816000 audit[3553]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe252d7e3 a2=241 a3=1b6 items=1 ppid=3470 pid=3553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:18.816000 audit: CWD cwd="/etc/service/enabled/cni/log" Jul 10 00:36:18.816000 audit: PATH item=0 name="/dev/fd/63" inode=19298 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:36:18.816000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 10 00:36:18.877557 systemd-networkd[1098]: califfcfb36d15a: Link UP Jul 10 00:36:18.879909 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 10 00:36:18.879975 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): califfcfb36d15a: link becomes ready Jul 10 00:36:18.880043 systemd-networkd[1098]: califfcfb36d15a: Gained carrier Jul 10 00:36:18.905274 env[1315]: 2025-07-10 00:36:18.692 [INFO][3448] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:36:18.905274 env[1315]: 2025-07-10 00:36:18.706 [INFO][3448] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5688bcc687--8k7vk-eth0 whisker-5688bcc687- calico-system d93e4554-3abc-467c-b37b-cea4f4d6e9ff 943 0 2025-07-10 00:36:18 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5688bcc687 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5688bcc687-8k7vk eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] califfcfb36d15a [] [] }} ContainerID="8f60d0e65f7c8e6c0903b12252769c5e5f2ac0f18d68e72fc6495ca2d9958735" Namespace="calico-system" Pod="whisker-5688bcc687-8k7vk" WorkloadEndpoint="localhost-k8s-whisker--5688bcc687--8k7vk-" Jul 10 00:36:18.905274 env[1315]: 2025-07-10 00:36:18.706 [INFO][3448] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8f60d0e65f7c8e6c0903b12252769c5e5f2ac0f18d68e72fc6495ca2d9958735" Namespace="calico-system" Pod="whisker-5688bcc687-8k7vk" WorkloadEndpoint="localhost-k8s-whisker--5688bcc687--8k7vk-eth0" Jul 10 00:36:18.905274 env[1315]: 2025-07-10 00:36:18.802 [INFO][3485] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f60d0e65f7c8e6c0903b12252769c5e5f2ac0f18d68e72fc6495ca2d9958735" HandleID="k8s-pod-network.8f60d0e65f7c8e6c0903b12252769c5e5f2ac0f18d68e72fc6495ca2d9958735" Workload="localhost-k8s-whisker--5688bcc687--8k7vk-eth0" Jul 10 00:36:18.905274 env[1315]: 2025-07-10 00:36:18.804 [INFO][3485] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8f60d0e65f7c8e6c0903b12252769c5e5f2ac0f18d68e72fc6495ca2d9958735" HandleID="k8s-pod-network.8f60d0e65f7c8e6c0903b12252769c5e5f2ac0f18d68e72fc6495ca2d9958735" Workload="localhost-k8s-whisker--5688bcc687--8k7vk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd8a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5688bcc687-8k7vk", "timestamp":"2025-07-10 00:36:18.802106189 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:36:18.905274 env[1315]: 2025-07-10 00:36:18.804 [INFO][3485] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:36:18.905274 env[1315]: 2025-07-10 00:36:18.804 [INFO][3485] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:36:18.905274 env[1315]: 2025-07-10 00:36:18.804 [INFO][3485] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:36:18.905274 env[1315]: 2025-07-10 00:36:18.815 [INFO][3485] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8f60d0e65f7c8e6c0903b12252769c5e5f2ac0f18d68e72fc6495ca2d9958735" host="localhost" Jul 10 00:36:18.905274 env[1315]: 2025-07-10 00:36:18.823 [INFO][3485] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:36:18.905274 env[1315]: 2025-07-10 00:36:18.832 [INFO][3485] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:36:18.905274 env[1315]: 2025-07-10 00:36:18.846 [INFO][3485] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:36:18.905274 env[1315]: 2025-07-10 00:36:18.848 [INFO][3485] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:36:18.905274 env[1315]: 2025-07-10 00:36:18.848 [INFO][3485] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8f60d0e65f7c8e6c0903b12252769c5e5f2ac0f18d68e72fc6495ca2d9958735" host="localhost" Jul 10 00:36:18.905274 env[1315]: 2025-07-10 00:36:18.851 [INFO][3485] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8f60d0e65f7c8e6c0903b12252769c5e5f2ac0f18d68e72fc6495ca2d9958735 Jul 10 00:36:18.905274 env[1315]: 2025-07-10 00:36:18.855 [INFO][3485] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8f60d0e65f7c8e6c0903b12252769c5e5f2ac0f18d68e72fc6495ca2d9958735" host="localhost" Jul 10 00:36:18.905274 env[1315]: 2025-07-10 00:36:18.861 [INFO][3485] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.8f60d0e65f7c8e6c0903b12252769c5e5f2ac0f18d68e72fc6495ca2d9958735" host="localhost" Jul 10 00:36:18.905274 env[1315]: 2025-07-10 00:36:18.861 [INFO][3485] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.8f60d0e65f7c8e6c0903b12252769c5e5f2ac0f18d68e72fc6495ca2d9958735" host="localhost" Jul 10 00:36:18.905274 env[1315]: 2025-07-10 00:36:18.861 [INFO][3485] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:36:18.905274 env[1315]: 2025-07-10 00:36:18.861 [INFO][3485] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="8f60d0e65f7c8e6c0903b12252769c5e5f2ac0f18d68e72fc6495ca2d9958735" HandleID="k8s-pod-network.8f60d0e65f7c8e6c0903b12252769c5e5f2ac0f18d68e72fc6495ca2d9958735" Workload="localhost-k8s-whisker--5688bcc687--8k7vk-eth0" Jul 10 00:36:18.906047 env[1315]: 2025-07-10 00:36:18.863 [INFO][3448] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8f60d0e65f7c8e6c0903b12252769c5e5f2ac0f18d68e72fc6495ca2d9958735" Namespace="calico-system" Pod="whisker-5688bcc687-8k7vk" WorkloadEndpoint="localhost-k8s-whisker--5688bcc687--8k7vk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5688bcc687--8k7vk-eth0", GenerateName:"whisker-5688bcc687-", Namespace:"calico-system", SelfLink:"", UID:"d93e4554-3abc-467c-b37b-cea4f4d6e9ff", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 36, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5688bcc687", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5688bcc687-8k7vk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califfcfb36d15a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:36:18.906047 env[1315]: 2025-07-10 00:36:18.863 [INFO][3448] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="8f60d0e65f7c8e6c0903b12252769c5e5f2ac0f18d68e72fc6495ca2d9958735" Namespace="calico-system" Pod="whisker-5688bcc687-8k7vk" WorkloadEndpoint="localhost-k8s-whisker--5688bcc687--8k7vk-eth0" Jul 10 00:36:18.906047 env[1315]: 2025-07-10 00:36:18.863 [INFO][3448] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califfcfb36d15a ContainerID="8f60d0e65f7c8e6c0903b12252769c5e5f2ac0f18d68e72fc6495ca2d9958735" Namespace="calico-system" Pod="whisker-5688bcc687-8k7vk" WorkloadEndpoint="localhost-k8s-whisker--5688bcc687--8k7vk-eth0" Jul 10 00:36:18.906047 env[1315]: 2025-07-10 00:36:18.889 [INFO][3448] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8f60d0e65f7c8e6c0903b12252769c5e5f2ac0f18d68e72fc6495ca2d9958735" Namespace="calico-system" Pod="whisker-5688bcc687-8k7vk" WorkloadEndpoint="localhost-k8s-whisker--5688bcc687--8k7vk-eth0" Jul 10 00:36:18.906047 env[1315]: 2025-07-10 00:36:18.890 [INFO][3448] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8f60d0e65f7c8e6c0903b12252769c5e5f2ac0f18d68e72fc6495ca2d9958735" Namespace="calico-system" Pod="whisker-5688bcc687-8k7vk" WorkloadEndpoint="localhost-k8s-whisker--5688bcc687--8k7vk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5688bcc687--8k7vk-eth0", GenerateName:"whisker-5688bcc687-", Namespace:"calico-system", SelfLink:"", UID:"d93e4554-3abc-467c-b37b-cea4f4d6e9ff", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 36, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5688bcc687", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8f60d0e65f7c8e6c0903b12252769c5e5f2ac0f18d68e72fc6495ca2d9958735", Pod:"whisker-5688bcc687-8k7vk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califfcfb36d15a", MAC:"86:91:e4:42:ce:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:36:18.906047 env[1315]: 2025-07-10 00:36:18.899 [INFO][3448] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8f60d0e65f7c8e6c0903b12252769c5e5f2ac0f18d68e72fc6495ca2d9958735" Namespace="calico-system" Pod="whisker-5688bcc687-8k7vk" WorkloadEndpoint="localhost-k8s-whisker--5688bcc687--8k7vk-eth0" Jul 10 00:36:18.932355 env[1315]: time="2025-07-10T00:36:18.927586288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:36:18.932355 env[1315]: time="2025-07-10T00:36:18.927631928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:36:18.932355 env[1315]: time="2025-07-10T00:36:18.927642488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:36:18.932355 env[1315]: time="2025-07-10T00:36:18.927791647Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f60d0e65f7c8e6c0903b12252769c5e5f2ac0f18d68e72fc6495ca2d9958735 pid=3584 runtime=io.containerd.runc.v2 Jul 10 00:36:19.000902 systemd-resolved[1236]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:36:19.025000 audit[3646]: AVC avc: denied { bpf } for pid=3646 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.025000 audit[3646]: AVC avc: denied { bpf } for pid=3646 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.025000 audit[3646]: AVC avc: denied { perfmon } for pid=3646 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.025000 audit[3646]: AVC avc: denied { perfmon } for pid=3646 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.025000 audit[3646]: AVC avc: denied { perfmon } for pid=3646 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.025000 audit[3646]: AVC avc: denied { perfmon } for pid=3646 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.025000 audit[3646]: AVC avc: denied { perfmon } for pid=3646 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.025000 audit[3646]: AVC avc: denied { bpf } for pid=3646 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.025000 audit[3646]: AVC avc: denied { bpf } for pid=3646 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.025000 audit: BPF prog-id=10 op=LOAD Jul 10 00:36:19.025000 audit[3646]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe27d7998 a2=98 a3=ffffe27d7988 items=0 ppid=3478 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.025000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 10 00:36:19.025000 audit: BPF prog-id=10 op=UNLOAD Jul 10 00:36:19.026000 audit[3646]: AVC avc: denied { bpf } for pid=3646 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.026000 audit[3646]: AVC avc: denied { bpf } for pid=3646 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.026000 audit[3646]: AVC avc: denied { perfmon } for pid=3646 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.026000 audit[3646]: AVC avc: denied { perfmon } for pid=3646 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.026000 audit[3646]: AVC avc: denied { perfmon } for pid=3646 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.026000 audit[3646]: AVC avc: denied { perfmon } for pid=3646 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.026000 audit[3646]: AVC avc: denied { perfmon } for pid=3646 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.026000 audit[3646]: AVC avc: denied { bpf } for pid=3646 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.026000 audit[3646]: AVC avc: denied { bpf } for pid=3646 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.026000 audit: BPF prog-id=11 op=LOAD Jul 10 00:36:19.026000 audit[3646]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe27d7848 a2=74 a3=95 items=0 ppid=3478 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.026000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 10 00:36:19.026000 audit: BPF prog-id=11 op=UNLOAD Jul 10 00:36:19.026000 audit[3646]: AVC avc: denied { bpf } for pid=3646 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.026000 audit[3646]: AVC avc: denied { bpf } for pid=3646 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.026000 audit[3646]: AVC avc: denied { perfmon } for pid=3646 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.026000 audit[3646]: AVC avc: denied { perfmon } for pid=3646 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.026000 audit[3646]: AVC avc: denied { perfmon } for pid=3646 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.026000 audit[3646]: AVC avc: denied { perfmon } for pid=3646 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.026000 audit[3646]: AVC avc: denied { perfmon } for pid=3646 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.026000 audit[3646]: AVC avc: denied { bpf } for pid=3646 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.026000 audit[3646]: AVC avc: denied { bpf } for pid=3646 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.026000 audit: BPF prog-id=12 op=LOAD Jul 10 00:36:19.026000 audit[3646]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe27d7878 a2=40 a3=ffffe27d78a8 items=0 ppid=3478 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.026000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 10 00:36:19.026000 audit: BPF prog-id=12 op=UNLOAD Jul 10 00:36:19.026000 audit[3646]: AVC avc: denied { perfmon } for pid=3646 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.026000 audit[3646]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=0 a1=ffffe27d7990 a2=50 a3=0 items=0 ppid=3478 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.026000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 10 00:36:19.028319 env[1315]: time="2025-07-10T00:36:19.028262828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5688bcc687-8k7vk,Uid:d93e4554-3abc-467c-b37b-cea4f4d6e9ff,Namespace:calico-system,Attempt:0,} returns sandbox id \"8f60d0e65f7c8e6c0903b12252769c5e5f2ac0f18d68e72fc6495ca2d9958735\"" Jul 10 00:36:19.030000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.030000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.030000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.030000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.030000 audit[3647]: AVC avc: denied { perfmon } 
for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.030000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.030000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.030000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.030000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.030000 audit: BPF prog-id=13 op=LOAD Jul 10 00:36:19.030000 audit[3647]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd1991d08 a2=98 a3=ffffd1991cf8 items=0 ppid=3478 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.030000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:36:19.030000 audit: BPF prog-id=13 op=UNLOAD Jul 10 00:36:19.030000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.030000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.030000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.030000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.030000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.030000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.030000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.030000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.030000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.030000 audit: BPF prog-id=14 op=LOAD Jul 10 
00:36:19.030000 audit[3647]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd1991998 a2=74 a3=95 items=0 ppid=3478 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.030000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:36:19.030000 audit: BPF prog-id=14 op=UNLOAD Jul 10 00:36:19.030000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.030000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.030000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.030000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.030000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.030000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.030000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.030000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.030000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.030000 audit: BPF prog-id=15 op=LOAD Jul 10 00:36:19.030000 audit[3647]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd19919f8 a2=94 a3=2 items=0 ppid=3478 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.030000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:36:19.030000 audit: BPF prog-id=15 op=UNLOAD Jul 10 00:36:19.032778 env[1315]: time="2025-07-10T00:36:19.032647955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 10 00:36:19.131000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.131000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.131000 audit[3647]: AVC avc: 
denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.131000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.131000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.131000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.131000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.131000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.131000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.131000 audit: BPF prog-id=16 op=LOAD Jul 10 00:36:19.131000 audit[3647]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd19919b8 a2=40 a3=ffffd19919e8 items=0 ppid=3478 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.131000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:36:19.131000 audit: BPF prog-id=16 op=UNLOAD Jul 10 00:36:19.131000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.131000 audit[3647]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffd1991ad0 a2=50 a3=0 items=0 ppid=3478 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.131000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:36:19.140000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.140000 audit[3647]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd1991a28 a2=28 a3=ffffd1991b58 items=0 ppid=3478 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.140000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:36:19.140000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.140000 audit[3647]: SYSCALL 
arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd1991a58 a2=28 a3=ffffd1991b88 items=0 ppid=3478 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.140000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:36:19.140000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.140000 audit[3647]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd1991908 a2=28 a3=ffffd1991a38 items=0 ppid=3478 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.140000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:36:19.140000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.140000 audit[3647]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd1991a78 a2=28 a3=ffffd1991ba8 items=0 ppid=3478 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.140000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:36:19.140000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.140000 audit[3647]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd1991a58 a2=28 a3=ffffd1991b88 items=0 ppid=3478 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.140000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:36:19.140000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.140000 audit[3647]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd1991a48 a2=28 a3=ffffd1991b78 items=0 ppid=3478 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.140000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:36:19.140000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.140000 audit[3647]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd1991a78 a2=28 a3=ffffd1991ba8 items=0 ppid=3478 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jul 10 00:36:19.140000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:36:19.140000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.140000 audit[3647]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd1991a58 a2=28 a3=ffffd1991b88 items=0 ppid=3478 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.140000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:36:19.140000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.140000 audit[3647]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd1991a78 a2=28 a3=ffffd1991ba8 items=0 ppid=3478 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.140000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:36:19.140000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.140000 audit[3647]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd1991a48 a2=28 a3=ffffd1991b78 items=0 ppid=3478 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.140000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:36:19.140000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.140000 audit[3647]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd1991ac8 a2=28 a3=ffffd1991c08 items=0 ppid=3478 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.140000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:36:19.140000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.140000 audit[3647]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffd1991800 a2=50 a3=0 items=0 ppid=3478 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.140000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:36:19.140000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Jul 10 00:36:19.140000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.140000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.140000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.140000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.140000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.140000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.140000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.140000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.140000 audit: BPF prog-id=17 op=LOAD Jul 10 00:36:19.140000 audit[3647]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd1991808 a2=94 a3=5 items=0 ppid=3478 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.140000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:36:19.140000 audit: BPF prog-id=17 op=UNLOAD Jul 10 00:36:19.140000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.140000 audit[3647]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffd1991910 a2=50 a3=0 items=0 ppid=3478 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.140000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:36:19.140000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.140000 audit[3647]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffd1991a58 a2=4 a3=3 items=0 ppid=3478 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.140000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 
00:36:19.140000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.140000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.140000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.140000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.140000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.140000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.140000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.140000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.140000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.140000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.140000 audit[3647]: AVC avc: denied { confidentiality } for pid=3647 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 10 00:36:19.140000 audit[3647]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffd1991a38 a2=94 a3=6 items=0 ppid=3478 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.140000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:36:19.141000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.141000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.141000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.141000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.141000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.141000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.141000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.141000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.141000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.141000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.141000 audit[3647]: AVC avc: denied { confidentiality } for pid=3647 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 10 00:36:19.141000 audit[3647]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffd1991208 a2=94 a3=83 items=0 ppid=3478 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.141000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:36:19.141000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.141000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.141000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.141000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.141000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.141000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.141000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.141000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.141000 audit[3647]: AVC avc: denied { perfmon } for pid=3647 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.141000 audit[3647]: AVC avc: denied { bpf } for pid=3647 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.141000 audit[3647]: AVC avc: denied { confidentiality } for pid=3647 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 10 00:36:19.141000 audit[3647]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffd1991208 a2=94 a3=83 items=0 ppid=3478 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.141000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 00:36:19.160000 audit[3650]: AVC avc: denied { bpf } for pid=3650 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.160000 audit[3650]: AVC avc: denied { bpf } for pid=3650 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.160000 audit[3650]: AVC avc: denied { perfmon } for pid=3650 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.160000 audit[3650]: AVC avc: denied { perfmon } for pid=3650 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.160000 audit[3650]: AVC avc: denied { perfmon } for pid=3650 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.160000 audit[3650]: AVC avc: denied { perfmon } for pid=3650 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.160000 audit[3650]: AVC avc: denied { perfmon } for pid=3650 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.160000 audit[3650]: AVC avc: denied { bpf } for pid=3650 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.160000 audit[3650]: AVC avc: denied { bpf } for pid=3650 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.160000 audit: BPF prog-id=18 op=LOAD Jul 10 00:36:19.160000 audit[3650]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe9abda08 a2=98 a3=ffffe9abd9f8 items=0 ppid=3478 pid=3650 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.160000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 10 00:36:19.160000 audit: BPF prog-id=18 op=UNLOAD Jul 10 00:36:19.160000 audit[3650]: AVC avc: denied { bpf } for pid=3650 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.160000 audit[3650]: AVC avc: denied { bpf } for pid=3650 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.160000 audit[3650]: AVC avc: denied { perfmon } for pid=3650 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.160000 audit[3650]: AVC avc: denied { perfmon } for pid=3650 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.160000 audit[3650]: AVC avc: denied { perfmon } for pid=3650 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.160000 audit[3650]: AVC avc: denied { perfmon } for pid=3650 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.160000 audit[3650]: AVC avc: denied { perfmon } for pid=3650 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.160000 audit[3650]: AVC avc: denied { bpf } for pid=3650 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.160000 audit[3650]: AVC avc: denied { bpf } for pid=3650 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.160000 audit: BPF prog-id=19 op=LOAD Jul 10 00:36:19.160000 audit[3650]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe9abd8b8 a2=74 a3=95 items=0 ppid=3478 pid=3650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.160000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 10 00:36:19.160000 audit: BPF prog-id=19 op=UNLOAD Jul 10 00:36:19.160000 audit[3650]: AVC avc: denied { bpf } for pid=3650 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.160000 audit[3650]: AVC avc: denied { bpf } for pid=3650 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.160000 audit[3650]: AVC avc: denied { perfmon } for pid=3650 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.160000 audit[3650]: AVC avc: denied { perfmon } for pid=3650 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.160000 audit[3650]: AVC avc: denied { perfmon } for pid=3650 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.160000 audit[3650]: AVC avc: denied { perfmon } for pid=3650 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.160000 audit[3650]: AVC avc: denied { perfmon } for pid=3650 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.160000 audit[3650]: AVC avc: denied { bpf } for pid=3650 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.160000 audit[3650]: AVC avc: denied { bpf } for pid=3650 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.160000 audit: BPF prog-id=20 op=LOAD Jul 10 00:36:19.160000 audit[3650]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe9abd8e8 a2=40 a3=ffffe9abd918 items=0 ppid=3478 pid=3650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.160000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 10 00:36:19.160000 audit: BPF prog-id=20 op=UNLOAD Jul 10 00:36:19.227255 systemd-networkd[1098]: vxlan.calico: Link UP Jul 10 00:36:19.227261 systemd-networkd[1098]: vxlan.calico: Gained carrier Jul 10 00:36:19.241000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.241000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.241000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.241000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.241000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 
00:36:19.241000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.241000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.241000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.241000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.241000 audit: BPF prog-id=21 op=LOAD Jul 10 00:36:19.241000 audit[3674]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc90956e8 a2=98 a3=ffffc90956d8 items=0 ppid=3478 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.241000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:36:19.241000 audit: BPF prog-id=21 op=UNLOAD Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit: BPF prog-id=22 op=LOAD Jul 10 
00:36:19.242000 audit[3674]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc90953c8 a2=74 a3=95 items=0 ppid=3478 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.242000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:36:19.242000 audit: BPF prog-id=22 op=UNLOAD Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit: BPF prog-id=23 op=LOAD Jul 10 00:36:19.242000 audit[3674]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc9095428 a2=94 a3=2 items=0 ppid=3478 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.242000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:36:19.242000 audit: BPF prog-id=23 op=UNLOAD Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=3 a0=12 a1=ffffc9095458 a2=28 a3=ffffc9095588 items=0 ppid=3478 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.242000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc9095488 a2=28 a3=ffffc90955b8 items=0 ppid=3478 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.242000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc9095338 a2=28 a3=ffffc9095468 items=0 ppid=3478 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.242000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc90954a8 a2=28 a3=ffffc90955d8 items=0 ppid=3478 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.242000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc9095488 a2=28 a3=ffffc90955b8 items=0 ppid=3478 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.242000 
audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc9095478 a2=28 a3=ffffc90955a8 items=0 ppid=3478 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.242000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc90954a8 a2=28 a3=ffffc90955d8 items=0 ppid=3478 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.242000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc9095488 a2=28 a3=ffffc90955b8 items=0 ppid=3478 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.242000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc90954a8 a2=28 a3=ffffc90955d8 items=0 ppid=3478 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.242000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { bpf } for 
pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc9095478 a2=28 a3=ffffc90955a8 items=0 ppid=3478 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.242000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc90954f8 a2=28 a3=ffffc9095638 items=0 ppid=3478 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.242000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.242000 audit: BPF prog-id=24 op=LOAD Jul 10 
00:36:19.242000 audit[3674]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc9095318 a2=40 a3=ffffc9095348 items=0 ppid=3478 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.242000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:36:19.242000 audit: BPF prog-id=24 op=UNLOAD Jul 10 00:36:19.243000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.243000 audit[3674]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=0 a1=ffffc9095340 a2=50 a3=0 items=0 ppid=3478 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.243000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:36:19.243000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.243000 audit[3674]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=0 a1=ffffc9095340 a2=50 a3=0 items=0 ppid=3478 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.243000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:36:19.243000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.243000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.243000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.243000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.243000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.243000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 
00:36:19.243000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.243000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.243000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.243000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.243000 audit: BPF prog-id=25 op=LOAD Jul 10 00:36:19.243000 audit[3674]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc9094aa8 a2=94 a3=2 items=0 ppid=3478 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.243000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:36:19.243000 audit: BPF prog-id=25 op=UNLOAD Jul 10 00:36:19.243000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.243000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.243000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.243000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.243000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.243000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.243000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.243000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.243000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.243000 audit[3674]: AVC avc: denied { bpf } for pid=3674 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.243000 audit: BPF prog-id=26 op=LOAD Jul 10 00:36:19.243000 audit[3674]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc9094c38 a2=94 a3=30 items=0 ppid=3478 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.243000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 00:36:19.245000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.245000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.245000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.245000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.245000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.245000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.245000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.245000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.245000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.245000 audit: BPF prog-id=27 op=LOAD Jul 10 00:36:19.245000 audit[3678]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff49c69e8 a2=98 a3=fffff49c69d8 items=0 ppid=3478 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.245000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:36:19.245000 audit: BPF prog-id=27 op=UNLOAD Jul 10 00:36:19.245000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.245000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.245000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.245000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.245000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.245000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.245000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.245000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.245000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.245000 audit: BPF prog-id=28 op=LOAD Jul 10 00:36:19.245000 audit[3678]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff49c6678 a2=74 a3=95 items=0 ppid=3478 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.245000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:36:19.245000 audit: BPF prog-id=28 op=UNLOAD Jul 10 00:36:19.245000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.245000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.245000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.245000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.245000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.245000 
audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.245000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.245000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.245000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.245000 audit: BPF prog-id=29 op=LOAD Jul 10 00:36:19.245000 audit[3678]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff49c66d8 a2=94 a3=2 items=0 ppid=3478 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.245000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:36:19.246000 audit: BPF prog-id=29 op=UNLOAD Jul 10 00:36:19.336000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.336000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.336000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.336000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.336000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.336000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.336000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.336000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.336000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.336000 audit: BPF prog-id=30 op=LOAD Jul 10 00:36:19.336000 audit[3678]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=4 a0=5 a1=fffff49c6698 a2=40 a3=fffff49c66c8 items=0 ppid=3478 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.336000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:36:19.337000 audit: BPF prog-id=30 op=UNLOAD Jul 10 00:36:19.337000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.337000 audit[3678]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=fffff49c67b0 a2=50 a3=0 items=0 ppid=3478 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.337000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.346000 audit[3678]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff49c6708 a2=28 a3=fffff49c6838 items=0 ppid=3478 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.346000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.346000 audit[3678]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff49c6738 a2=28 a3=fffff49c6868 items=0 ppid=3478 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.346000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.346000 audit[3678]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff49c65e8 a2=28 a3=fffff49c6718 items=0 ppid=3478 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.346000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.346000 audit[3678]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff49c6758 a2=28 a3=fffff49c6888 items=0 ppid=3478 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.346000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.346000 audit[3678]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff49c6738 a2=28 a3=fffff49c6868 items=0 ppid=3478 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.346000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.346000 audit[3678]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff49c6728 a2=28 a3=fffff49c6858 items=0 ppid=3478 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.346000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.346000 audit[3678]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff49c6758 a2=28 a3=fffff49c6888 items=0 ppid=3478 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.346000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 
00:36:19.346000 audit[3678]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff49c6738 a2=28 a3=fffff49c6868 items=0 ppid=3478 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.346000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.346000 audit[3678]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff49c6758 a2=28 a3=fffff49c6888 items=0 ppid=3478 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.346000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.346000 audit[3678]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff49c6728 a2=28 a3=fffff49c6858 items=0 ppid=3478 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.346000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.346000 audit[3678]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff49c67a8 a2=28 a3=fffff49c68e8 items=0 ppid=3478 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.346000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.346000 audit[3678]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=fffff49c64e0 a2=50 a3=0 items=0 ppid=3478 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.346000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.346000 audit: BPF prog-id=31 op=LOAD Jul 10 00:36:19.346000 audit[3678]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff49c64e8 a2=94 a3=5 items=0 ppid=3478 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.346000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:36:19.346000 audit: BPF prog-id=31 op=UNLOAD Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.346000 audit[3678]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=fffff49c65f0 a2=50 a3=0 items=0 ppid=3478 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.346000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.346000 audit[3678]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=fffff49c6738 a2=4 a3=3 items=0 ppid=3478 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.346000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.346000 audit[3678]: AVC avc: denied { confidentiality } for pid=3678 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 10 00:36:19.346000 audit[3678]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffff49c6718 a2=94 a3=6 items=0 ppid=3478 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.346000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:36:19.347000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.347000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.347000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.347000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.347000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.347000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.347000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.347000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.347000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.347000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.347000 audit[3678]: AVC avc: denied { confidentiality } for pid=3678 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 10 00:36:19.347000 audit[3678]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffff49c5ee8 a2=94 a3=83 items=0 ppid=3478 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.347000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:36:19.347000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 
00:36:19.347000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.347000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.347000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.347000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.347000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.347000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.347000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.347000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.347000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.347000 audit[3678]: AVC avc: denied { confidentiality } for pid=3678 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 10 00:36:19.347000 audit[3678]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffff49c5ee8 a2=94 a3=83 items=0 ppid=3478 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.347000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:36:19.347000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.347000 audit[3678]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffff49c7928 a2=10 a3=fffff49c7a18 items=0 ppid=3478 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.347000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:36:19.347000 
audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.347000 audit[3678]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffff49c77e8 a2=10 a3=fffff49c78d8 items=0 ppid=3478 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.347000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:36:19.347000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.347000 audit[3678]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffff49c7758 a2=10 a3=fffff49c78d8 items=0 ppid=3478 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.347000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:36:19.347000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 00:36:19.347000 audit[3678]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffff49c7758 a2=10 a3=fffff49c78d8 items=0 ppid=3478 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.347000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 00:36:19.354000 audit: BPF prog-id=26 op=UNLOAD Jul 10 00:36:19.400000 audit[3708]: NETFILTER_CFG table=mangle:101 family=2 entries=16 op=nft_register_chain pid=3708 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 00:36:19.400000 audit[3708]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffe0864b70 a2=0 a3=ffff95c2afa8 items=0 ppid=3478 pid=3708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.400000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 00:36:19.409000 audit[3709]: NETFILTER_CFG table=nat:102 family=2 entries=15 op=nft_register_chain pid=3709 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 00:36:19.409000 audit[3709]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffedeba980 a2=0 a3=ffff8b76afa8 items=0 ppid=3478 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.409000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 00:36:19.412000 audit[3707]: NETFILTER_CFG table=raw:103 family=2 entries=21 op=nft_register_chain pid=3707 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 00:36:19.412000 audit[3707]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffe2dfeda0 a2=0 a3=ffffa32eafa8 items=0 ppid=3478 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.412000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 00:36:19.413000 audit[3712]: NETFILTER_CFG table=filter:104 family=2 entries=94 op=nft_register_chain pid=3712 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 00:36:19.413000 audit[3712]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffcd713770 a2=0 a3=ffffad8a8fa8 items=0 ppid=3478 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:19.413000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 00:36:20.125926 env[1315]: time="2025-07-10T00:36:20.125856069Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:20.129201 env[1315]: time="2025-07-10T00:36:20.129099645Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:20.130635 env[1315]: time="2025-07-10T00:36:20.130601434Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:20.132848 env[1315]: time="2025-07-10T00:36:20.132808178Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:20.133772 env[1315]: time="2025-07-10T00:36:20.133736651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 10 00:36:20.139386 env[1315]: time="2025-07-10T00:36:20.139329530Z" level=info msg="CreateContainer within sandbox \"8f60d0e65f7c8e6c0903b12252769c5e5f2ac0f18d68e72fc6495ca2d9958735\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 10 00:36:20.155224 env[1315]: time="2025-07-10T00:36:20.155175614Z" level=info msg="CreateContainer within sandbox \"8f60d0e65f7c8e6c0903b12252769c5e5f2ac0f18d68e72fc6495ca2d9958735\" for 
&ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"81348be59661326f8456885d2e397e30a853a5480dec4c1ee32c1ad3f7be6f49\"" Jul 10 00:36:20.158025 env[1315]: time="2025-07-10T00:36:20.158000553Z" level=info msg="StartContainer for \"81348be59661326f8456885d2e397e30a853a5480dec4c1ee32c1ad3f7be6f49\"" Jul 10 00:36:20.196603 systemd-networkd[1098]: califfcfb36d15a: Gained IPv6LL Jul 10 00:36:20.210250 kubelet[2094]: I0710 00:36:20.209986 2094 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a81984fa-095f-48f3-8dad-bb2f184b8979" path="/var/lib/kubelet/pods/a81984fa-095f-48f3-8dad-bb2f184b8979/volumes" Jul 10 00:36:20.276996 env[1315]: time="2025-07-10T00:36:20.276944960Z" level=info msg="StartContainer for \"81348be59661326f8456885d2e397e30a853a5480dec4c1ee32c1ad3f7be6f49\" returns successfully" Jul 10 00:36:20.278402 env[1315]: time="2025-07-10T00:36:20.278346830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 10 00:36:20.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.85:22-10.0.0.1:54588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:20.442597 systemd[1]: Started sshd@7-10.0.0.85:22-10.0.0.1:54588.service. Jul 10 00:36:20.488000 audit[3766]: USER_ACCT pid=3766 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:20.489595 sshd[3766]: Accepted publickey for core from 10.0.0.1 port 54588 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:36:20.490000 audit[3766]: CRED_ACQ pid=3766 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:20.490000 audit[3766]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd29e6d30 a2=3 a3=1 items=0 ppid=1 pid=3766 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:20.490000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:36:20.491172 sshd[3766]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:20.495908 systemd[1]: Started session-8.scope. Jul 10 00:36:20.496255 systemd-logind[1302]: New session 8 of user core. 
Jul 10 00:36:20.499000 audit[3766]: USER_START pid=3766 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:20.500000 audit[3769]: CRED_ACQ pid=3769 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:20.661707 sshd[3766]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:20.662000 audit[3766]: USER_END pid=3766 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:20.662000 audit[3766]: CRED_DISP pid=3766 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:20.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.85:22-10.0.0.1:54588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:20.664109 systemd[1]: sshd@7-10.0.0.85:22-10.0.0.1:54588.service: Deactivated successfully. Jul 10 00:36:20.665382 systemd[1]: session-8.scope: Deactivated successfully. Jul 10 00:36:20.665878 systemd-logind[1302]: Session 8 logged out. Waiting for processes to exit. Jul 10 00:36:20.666885 systemd-logind[1302]: Removed session 8. Jul 10 00:36:20.900034 systemd-networkd[1098]: vxlan.calico: Gained IPv6LL Jul 10 00:36:22.009000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4132079999.mount: Deactivated successfully. 
Jul 10 00:36:22.041637 env[1315]: time="2025-07-10T00:36:22.041579764Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:22.044614 env[1315]: time="2025-07-10T00:36:22.044576103Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:22.046702 env[1315]: time="2025-07-10T00:36:22.046675208Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:22.048211 env[1315]: time="2025-07-10T00:36:22.048169398Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:22.048703 env[1315]: time="2025-07-10T00:36:22.048678634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 10 00:36:22.050910 env[1315]: time="2025-07-10T00:36:22.050880179Z" level=info msg="CreateContainer within sandbox \"8f60d0e65f7c8e6c0903b12252769c5e5f2ac0f18d68e72fc6495ca2d9958735\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 10 00:36:22.061394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1765904471.mount: Deactivated successfully. Jul 10 00:36:22.067455 env[1315]: time="2025-07-10T00:36:22.067378905Z" level=info msg="CreateContainer within sandbox \"8f60d0e65f7c8e6c0903b12252769c5e5f2ac0f18d68e72fc6495ca2d9958735\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"9a845fb75d11dacbf93a80870412bdcf799aad7e14c3e02da4acca36e6c81eaf\"" Jul 10 00:36:22.068030 env[1315]: time="2025-07-10T00:36:22.067997021Z" level=info msg="StartContainer for \"9a845fb75d11dacbf93a80870412bdcf799aad7e14c3e02da4acca36e6c81eaf\"" Jul 10 00:36:22.148898 env[1315]: time="2025-07-10T00:36:22.148844861Z" level=info msg="StartContainer for \"9a845fb75d11dacbf93a80870412bdcf799aad7e14c3e02da4acca36e6c81eaf\" returns successfully" Jul 10 00:36:22.330478 kubelet[2094]: I0710 00:36:22.330026 2094 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5688bcc687-8k7vk" podStartSLOduration=1.3108870320000001 podStartE2EDuration="4.330010328s" podCreationTimestamp="2025-07-10 00:36:18 +0000 UTC" firstStartedPulling="2025-07-10 00:36:19.030602571 +0000 UTC m=+36.927181632" lastFinishedPulling="2025-07-10 00:36:22.049725867 +0000 UTC m=+39.946304928" observedRunningTime="2025-07-10 00:36:22.329554291 +0000 UTC m=+40.226133352" watchObservedRunningTime="2025-07-10 00:36:22.330010328 +0000 UTC m=+40.226589389" Jul 10 00:36:22.350000 audit[3821]: NETFILTER_CFG table=filter:105 family=2 entries=19 op=nft_register_rule pid=3821 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:22.350000 audit[3821]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff6d10fb0 a2=0 a3=1 items=0 ppid=2205 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:22.350000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:22.364000 audit[3821]: NETFILTER_CFG table=nat:106 family=2 entries=21 op=nft_register_chain pid=3821 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:22.364000 audit[3821]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7044 a0=3 a1=fffff6d10fb0 a2=0 a3=1 items=0 ppid=2205 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:22.364000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:23.203816 env[1315]: time="2025-07-10T00:36:23.203775960Z" level=info msg="StopPodSandbox for \"c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77\"" Jul 10 00:36:23.204196 env[1315]: time="2025-07-10T00:36:23.203804440Z" level=info msg="StopPodSandbox for \"4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9\"" Jul 10 00:36:23.310456 env[1315]: 2025-07-10 00:36:23.261 [INFO][3845] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" Jul 10 00:36:23.310456 env[1315]: 2025-07-10 00:36:23.261 [INFO][3845] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" iface="eth0" netns="/var/run/netns/cni-09e9fec4-f77f-394d-1912-e21f2f7bfea1" Jul 10 00:36:23.310456 env[1315]: 2025-07-10 00:36:23.261 [INFO][3845] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" iface="eth0" netns="/var/run/netns/cni-09e9fec4-f77f-394d-1912-e21f2f7bfea1" Jul 10 00:36:23.310456 env[1315]: 2025-07-10 00:36:23.261 [INFO][3845] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" iface="eth0" netns="/var/run/netns/cni-09e9fec4-f77f-394d-1912-e21f2f7bfea1" Jul 10 00:36:23.310456 env[1315]: 2025-07-10 00:36:23.261 [INFO][3845] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" Jul 10 00:36:23.310456 env[1315]: 2025-07-10 00:36:23.261 [INFO][3845] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" Jul 10 00:36:23.310456 env[1315]: 2025-07-10 00:36:23.296 [INFO][3859] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" HandleID="k8s-pod-network.4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" Workload="localhost-k8s-goldmane--58fd7646b9--ltbnn-eth0" Jul 10 00:36:23.310456 env[1315]: 2025-07-10 00:36:23.296 [INFO][3859] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:36:23.310456 env[1315]: 2025-07-10 00:36:23.296 [INFO][3859] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:36:23.310456 env[1315]: 2025-07-10 00:36:23.304 [WARNING][3859] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" HandleID="k8s-pod-network.4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" Workload="localhost-k8s-goldmane--58fd7646b9--ltbnn-eth0" Jul 10 00:36:23.310456 env[1315]: 2025-07-10 00:36:23.305 [INFO][3859] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" HandleID="k8s-pod-network.4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" Workload="localhost-k8s-goldmane--58fd7646b9--ltbnn-eth0" Jul 10 00:36:23.310456 env[1315]: 2025-07-10 00:36:23.307 [INFO][3859] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:36:23.310456 env[1315]: 2025-07-10 00:36:23.308 [INFO][3845] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" Jul 10 00:36:23.312895 systemd[1]: run-netns-cni\x2d09e9fec4\x2df77f\x2d394d\x2d1912\x2de21f2f7bfea1.mount: Deactivated successfully. Jul 10 00:36:23.314843 env[1315]: time="2025-07-10T00:36:23.313668420Z" level=info msg="TearDown network for sandbox \"4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9\" successfully" Jul 10 00:36:23.314843 env[1315]: time="2025-07-10T00:36:23.313708980Z" level=info msg="StopPodSandbox for \"4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9\" returns successfully" Jul 10 00:36:23.315054 env[1315]: time="2025-07-10T00:36:23.315022331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-ltbnn,Uid:fd0d6de4-b13e-4994-9b29-6ae2a2c1a419,Namespace:calico-system,Attempt:1,}" Jul 10 00:36:23.340749 env[1315]: 2025-07-10 00:36:23.277 [INFO][3846] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" Jul 10 00:36:23.340749 env[1315]: 2025-07-10 00:36:23.277 [INFO][3846] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" iface="eth0" netns="/var/run/netns/cni-c5f22983-94f1-ace5-bcb2-c1d6c0adea24" Jul 10 00:36:23.340749 env[1315]: 2025-07-10 00:36:23.277 [INFO][3846] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" iface="eth0" netns="/var/run/netns/cni-c5f22983-94f1-ace5-bcb2-c1d6c0adea24" Jul 10 00:36:23.340749 env[1315]: 2025-07-10 00:36:23.278 [INFO][3846] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" iface="eth0" netns="/var/run/netns/cni-c5f22983-94f1-ace5-bcb2-c1d6c0adea24" Jul 10 00:36:23.340749 env[1315]: 2025-07-10 00:36:23.278 [INFO][3846] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" Jul 10 00:36:23.340749 env[1315]: 2025-07-10 00:36:23.278 [INFO][3846] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" Jul 10 00:36:23.340749 env[1315]: 2025-07-10 00:36:23.318 [INFO][3867] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" HandleID="k8s-pod-network.c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" Workload="localhost-k8s-coredns--7c65d6cfc9--2plgf-eth0" Jul 10 00:36:23.340749 env[1315]: 2025-07-10 00:36:23.319 [INFO][3867] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:36:23.340749 env[1315]: 2025-07-10 00:36:23.319 [INFO][3867] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:36:23.340749 env[1315]: 2025-07-10 00:36:23.332 [WARNING][3867] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" HandleID="k8s-pod-network.c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" Workload="localhost-k8s-coredns--7c65d6cfc9--2plgf-eth0" Jul 10 00:36:23.340749 env[1315]: 2025-07-10 00:36:23.332 [INFO][3867] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" HandleID="k8s-pod-network.c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" Workload="localhost-k8s-coredns--7c65d6cfc9--2plgf-eth0" Jul 10 00:36:23.340749 env[1315]: 2025-07-10 00:36:23.336 [INFO][3867] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:36:23.340749 env[1315]: 2025-07-10 00:36:23.339 [INFO][3846] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" Jul 10 00:36:23.341405 env[1315]: time="2025-07-10T00:36:23.341369114Z" level=info msg="TearDown network for sandbox \"c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77\" successfully" Jul 10 00:36:23.341521 env[1315]: time="2025-07-10T00:36:23.341501673Z" level=info msg="StopPodSandbox for \"c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77\" returns successfully" Jul 10 00:36:23.341888 kubelet[2094]: E0710 00:36:23.341845 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:23.344462 env[1315]: time="2025-07-10T00:36:23.344395213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2plgf,Uid:7ca43f8d-e9b6-493a-b482-3d2dc7232c75,Namespace:kube-system,Attempt:1,}" Jul 10 00:36:23.345972 systemd[1]: run-netns-cni\x2dc5f22983\x2d94f1\x2dace5\x2dbcb2\x2dc1d6c0adea24.mount: Deactivated successfully. 
Jul 10 00:36:23.453471 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6c37deeddec: link becomes ready Jul 10 00:36:23.448982 systemd-networkd[1098]: cali6c37deeddec: Link UP Jul 10 00:36:23.449163 systemd-networkd[1098]: cali6c37deeddec: Gained carrier Jul 10 00:36:23.463411 env[1315]: 2025-07-10 00:36:23.368 [INFO][3876] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--ltbnn-eth0 goldmane-58fd7646b9- calico-system fd0d6de4-b13e-4994-9b29-6ae2a2c1a419 1013 0 2025-07-10 00:36:02 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-ltbnn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6c37deeddec [] [] }} ContainerID="6d23b2e04a267d411b87a27ff984e2044b97fcdabb4994cb58b7f26ba1f33fad" Namespace="calico-system" Pod="goldmane-58fd7646b9-ltbnn" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--ltbnn-" Jul 10 00:36:23.463411 env[1315]: 2025-07-10 00:36:23.368 [INFO][3876] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d23b2e04a267d411b87a27ff984e2044b97fcdabb4994cb58b7f26ba1f33fad" Namespace="calico-system" Pod="goldmane-58fd7646b9-ltbnn" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--ltbnn-eth0" Jul 10 00:36:23.463411 env[1315]: 2025-07-10 00:36:23.401 [INFO][3903] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d23b2e04a267d411b87a27ff984e2044b97fcdabb4994cb58b7f26ba1f33fad" HandleID="k8s-pod-network.6d23b2e04a267d411b87a27ff984e2044b97fcdabb4994cb58b7f26ba1f33fad" Workload="localhost-k8s-goldmane--58fd7646b9--ltbnn-eth0" Jul 10 00:36:23.463411 env[1315]: 2025-07-10 00:36:23.401 [INFO][3903] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6d23b2e04a267d411b87a27ff984e2044b97fcdabb4994cb58b7f26ba1f33fad" HandleID="k8s-pod-network.6d23b2e04a267d411b87a27ff984e2044b97fcdabb4994cb58b7f26ba1f33fad" Workload="localhost-k8s-goldmane--58fd7646b9--ltbnn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3010), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-ltbnn", "timestamp":"2025-07-10 00:36:23.401569469 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:36:23.463411 env[1315]: 2025-07-10 00:36:23.401 [INFO][3903] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:36:23.463411 env[1315]: 2025-07-10 00:36:23.401 [INFO][3903] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:36:23.463411 env[1315]: 2025-07-10 00:36:23.401 [INFO][3903] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:36:23.463411 env[1315]: 2025-07-10 00:36:23.413 [INFO][3903] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6d23b2e04a267d411b87a27ff984e2044b97fcdabb4994cb58b7f26ba1f33fad" host="localhost" Jul 10 00:36:23.463411 env[1315]: 2025-07-10 00:36:23.418 [INFO][3903] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:36:23.463411 env[1315]: 2025-07-10 00:36:23.424 [INFO][3903] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:36:23.463411 env[1315]: 2025-07-10 00:36:23.426 [INFO][3903] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:36:23.463411 env[1315]: 2025-07-10 00:36:23.429 [INFO][3903] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:36:23.463411 env[1315]: 2025-07-10 00:36:23.430 [INFO][3903] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6d23b2e04a267d411b87a27ff984e2044b97fcdabb4994cb58b7f26ba1f33fad" host="localhost" Jul 10 00:36:23.463411 env[1315]: 2025-07-10 00:36:23.432 [INFO][3903] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6d23b2e04a267d411b87a27ff984e2044b97fcdabb4994cb58b7f26ba1f33fad Jul 10 00:36:23.463411 env[1315]: 2025-07-10 00:36:23.437 [INFO][3903] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6d23b2e04a267d411b87a27ff984e2044b97fcdabb4994cb58b7f26ba1f33fad" host="localhost" Jul 10 00:36:23.463411 env[1315]: 2025-07-10 00:36:23.442 [INFO][3903] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.6d23b2e04a267d411b87a27ff984e2044b97fcdabb4994cb58b7f26ba1f33fad" host="localhost" Jul 10 00:36:23.463411 env[1315]: 2025-07-10 00:36:23.442 [INFO][3903] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.6d23b2e04a267d411b87a27ff984e2044b97fcdabb4994cb58b7f26ba1f33fad" host="localhost" Jul 10 00:36:23.463411 env[1315]: 2025-07-10 00:36:23.442 [INFO][3903] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:36:23.463411 env[1315]: 2025-07-10 00:36:23.442 [INFO][3903] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="6d23b2e04a267d411b87a27ff984e2044b97fcdabb4994cb58b7f26ba1f33fad" HandleID="k8s-pod-network.6d23b2e04a267d411b87a27ff984e2044b97fcdabb4994cb58b7f26ba1f33fad" Workload="localhost-k8s-goldmane--58fd7646b9--ltbnn-eth0" Jul 10 00:36:23.464022 env[1315]: 2025-07-10 00:36:23.444 [INFO][3876] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d23b2e04a267d411b87a27ff984e2044b97fcdabb4994cb58b7f26ba1f33fad" Namespace="calico-system" Pod="goldmane-58fd7646b9-ltbnn" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--ltbnn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--ltbnn-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"fd0d6de4-b13e-4994-9b29-6ae2a2c1a419", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 36, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-ltbnn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6c37deeddec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:36:23.464022 env[1315]: 2025-07-10 00:36:23.444 [INFO][3876] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="6d23b2e04a267d411b87a27ff984e2044b97fcdabb4994cb58b7f26ba1f33fad" Namespace="calico-system" Pod="goldmane-58fd7646b9-ltbnn" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--ltbnn-eth0" Jul 10 00:36:23.464022 env[1315]: 2025-07-10 00:36:23.444 [INFO][3876] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6c37deeddec ContainerID="6d23b2e04a267d411b87a27ff984e2044b97fcdabb4994cb58b7f26ba1f33fad" Namespace="calico-system" Pod="goldmane-58fd7646b9-ltbnn" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--ltbnn-eth0" Jul 10 00:36:23.464022 env[1315]: 2025-07-10 00:36:23.445 [INFO][3876] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d23b2e04a267d411b87a27ff984e2044b97fcdabb4994cb58b7f26ba1f33fad" Namespace="calico-system" Pod="goldmane-58fd7646b9-ltbnn" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--ltbnn-eth0" Jul 10 00:36:23.464022 env[1315]: 2025-07-10 00:36:23.446 [INFO][3876] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6d23b2e04a267d411b87a27ff984e2044b97fcdabb4994cb58b7f26ba1f33fad" Namespace="calico-system" Pod="goldmane-58fd7646b9-ltbnn" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--ltbnn-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--ltbnn-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"fd0d6de4-b13e-4994-9b29-6ae2a2c1a419", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 36, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6d23b2e04a267d411b87a27ff984e2044b97fcdabb4994cb58b7f26ba1f33fad", Pod:"goldmane-58fd7646b9-ltbnn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6c37deeddec", MAC:"32:2b:de:22:2c:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:36:23.464022 env[1315]: 2025-07-10 00:36:23.459 [INFO][3876] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6d23b2e04a267d411b87a27ff984e2044b97fcdabb4994cb58b7f26ba1f33fad" Namespace="calico-system" Pod="goldmane-58fd7646b9-ltbnn" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--ltbnn-eth0" Jul 10 00:36:23.474000 audit[3932]: NETFILTER_CFG table=filter:107 family=2 entries=44 op=nft_register_chain pid=3932 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 00:36:23.474000 audit[3932]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25180 a0=3 a1=fffff7f83ee0 a2=0 a3=ffffa5229fa8 items=0 ppid=3478 pid=3932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:23.474000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 00:36:23.479398 env[1315]: time="2025-07-10T00:36:23.479323145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:36:23.479398 env[1315]: time="2025-07-10T00:36:23.479370145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:36:23.479398 env[1315]: time="2025-07-10T00:36:23.479380545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:36:23.479563 env[1315]: time="2025-07-10T00:36:23.479532824Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d23b2e04a267d411b87a27ff984e2044b97fcdabb4994cb58b7f26ba1f33fad pid=3940 runtime=io.containerd.runc.v2 Jul 10 00:36:23.522295 systemd-resolved[1236]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:36:23.546034 env[1315]: time="2025-07-10T00:36:23.545958937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-ltbnn,Uid:fd0d6de4-b13e-4994-9b29-6ae2a2c1a419,Namespace:calico-system,Attempt:1,} returns sandbox id \"6d23b2e04a267d411b87a27ff984e2044b97fcdabb4994cb58b7f26ba1f33fad\"" Jul 10 00:36:23.549174 env[1315]: time="2025-07-10T00:36:23.549139835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 10 00:36:23.559186 systemd-networkd[1098]: cali6516e0cc8a9: Link UP Jul 10 00:36:23.560453 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6516e0cc8a9: link becomes ready Jul 10 00:36:23.562630 systemd-networkd[1098]: cali6516e0cc8a9: Gained carrier Jul 10 00:36:23.613314 env[1315]: 2025-07-10 00:36:23.395 [INFO][3891] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--2plgf-eth0 coredns-7c65d6cfc9- kube-system 7ca43f8d-e9b6-493a-b482-3d2dc7232c75 1014 0 2025-07-10 00:35:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-2plgf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6516e0cc8a9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6571ea4c14588882565bc031474feccb76a8f356384e912617b88a4107b72646" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2plgf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2plgf-" Jul 10 00:36:23.613314 env[1315]: 2025-07-10 00:36:23.395 [INFO][3891] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6571ea4c14588882565bc031474feccb76a8f356384e912617b88a4107b72646" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2plgf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2plgf-eth0" Jul 10 00:36:23.613314 env[1315]: 2025-07-10 00:36:23.435 [INFO][3912] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6571ea4c14588882565bc031474feccb76a8f356384e912617b88a4107b72646" HandleID="k8s-pod-network.6571ea4c14588882565bc031474feccb76a8f356384e912617b88a4107b72646" Workload="localhost-k8s-coredns--7c65d6cfc9--2plgf-eth0" Jul 10 00:36:23.613314 env[1315]: 2025-07-10 00:36:23.436 [INFO][3912] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6571ea4c14588882565bc031474feccb76a8f356384e912617b88a4107b72646" HandleID="k8s-pod-network.6571ea4c14588882565bc031474feccb76a8f356384e912617b88a4107b72646" Workload="localhost-k8s-coredns--7c65d6cfc9--2plgf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d730), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-2plgf", "timestamp":"2025-07-10 00:36:23.435962757 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:36:23.613314 env[1315]: 2025-07-10 00:36:23.437 [INFO][3912] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:36:23.613314 env[1315]: 2025-07-10 00:36:23.442 [INFO][3912] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:36:23.613314 env[1315]: 2025-07-10 00:36:23.442 [INFO][3912] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:36:23.613314 env[1315]: 2025-07-10 00:36:23.515 [INFO][3912] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6571ea4c14588882565bc031474feccb76a8f356384e912617b88a4107b72646" host="localhost" Jul 10 00:36:23.613314 env[1315]: 2025-07-10 00:36:23.526 [INFO][3912] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:36:23.613314 env[1315]: 2025-07-10 00:36:23.536 [INFO][3912] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:36:23.613314 env[1315]: 2025-07-10 00:36:23.538 [INFO][3912] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:36:23.613314 env[1315]: 2025-07-10 00:36:23.541 [INFO][3912] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:36:23.613314 env[1315]: 2025-07-10 00:36:23.541 [INFO][3912] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6571ea4c14588882565bc031474feccb76a8f356384e912617b88a4107b72646" host="localhost" Jul 10 00:36:23.613314 env[1315]: 2025-07-10 00:36:23.542 [INFO][3912] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6571ea4c14588882565bc031474feccb76a8f356384e912617b88a4107b72646 Jul 10 00:36:23.613314 env[1315]: 2025-07-10 00:36:23.546 [INFO][3912] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6571ea4c14588882565bc031474feccb76a8f356384e912617b88a4107b72646" host="localhost" Jul 10 00:36:23.613314 env[1315]: 2025-07-10 00:36:23.555 [INFO][3912] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.6571ea4c14588882565bc031474feccb76a8f356384e912617b88a4107b72646" host="localhost" Jul 10 00:36:23.613314 env[1315]: 2025-07-10 00:36:23.555 [INFO][3912] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.6571ea4c14588882565bc031474feccb76a8f356384e912617b88a4107b72646" host="localhost" Jul 10 00:36:23.613314 env[1315]: 2025-07-10 00:36:23.555 [INFO][3912] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:36:23.613314 env[1315]: 2025-07-10 00:36:23.555 [INFO][3912] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="6571ea4c14588882565bc031474feccb76a8f356384e912617b88a4107b72646" HandleID="k8s-pod-network.6571ea4c14588882565bc031474feccb76a8f356384e912617b88a4107b72646" Workload="localhost-k8s-coredns--7c65d6cfc9--2plgf-eth0" Jul 10 00:36:23.613977 env[1315]: 2025-07-10 00:36:23.557 [INFO][3891] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6571ea4c14588882565bc031474feccb76a8f356384e912617b88a4107b72646" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2plgf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2plgf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--2plgf-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7ca43f8d-e9b6-493a-b482-3d2dc7232c75", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 35, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-2plgf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6516e0cc8a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:36:23.613977 env[1315]: 2025-07-10 00:36:23.557 [INFO][3891] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="6571ea4c14588882565bc031474feccb76a8f356384e912617b88a4107b72646" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2plgf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2plgf-eth0" Jul 10 00:36:23.613977 env[1315]: 2025-07-10 00:36:23.557 [INFO][3891] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6516e0cc8a9 ContainerID="6571ea4c14588882565bc031474feccb76a8f356384e912617b88a4107b72646" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2plgf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2plgf-eth0" Jul 10 00:36:23.613977 env[1315]: 2025-07-10 00:36:23.560 [INFO][3891] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6571ea4c14588882565bc031474feccb76a8f356384e912617b88a4107b72646" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2plgf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2plgf-eth0" Jul 10 00:36:23.613977 env[1315]: 2025-07-10 00:36:23.560 
[INFO][3891] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6571ea4c14588882565bc031474feccb76a8f356384e912617b88a4107b72646" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2plgf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2plgf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--2plgf-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7ca43f8d-e9b6-493a-b482-3d2dc7232c75", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 35, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6571ea4c14588882565bc031474feccb76a8f356384e912617b88a4107b72646", Pod:"coredns-7c65d6cfc9-2plgf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6516e0cc8a9", MAC:"0e:e5:f2:00:32:ca", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:36:23.613977 env[1315]: 2025-07-10 00:36:23.607 [INFO][3891] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6571ea4c14588882565bc031474feccb76a8f356384e912617b88a4107b72646" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2plgf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2plgf-eth0" Jul 10 00:36:23.625591 env[1315]: time="2025-07-10T00:36:23.625511641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:36:23.625591 env[1315]: time="2025-07-10T00:36:23.625551561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:36:23.625591 env[1315]: time="2025-07-10T00:36:23.625561961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:36:23.625950 env[1315]: time="2025-07-10T00:36:23.625865719Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6571ea4c14588882565bc031474feccb76a8f356384e912617b88a4107b72646 pid=3991 runtime=io.containerd.runc.v2 Jul 10 00:36:23.631000 audit[3996]: NETFILTER_CFG table=filter:108 family=2 entries=52 op=nft_register_chain pid=3996 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 00:36:23.631000 audit[3996]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26592 a0=3 a1=ffffdf7bec90 a2=0 a3=ffff81894fa8 items=0 ppid=3478 pid=3996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:23.631000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 00:36:23.656090 systemd-resolved[1236]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:36:23.673502 env[1315]: time="2025-07-10T00:36:23.672854283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2plgf,Uid:7ca43f8d-e9b6-493a-b482-3d2dc7232c75,Namespace:kube-system,Attempt:1,} returns sandbox id \"6571ea4c14588882565bc031474feccb76a8f356384e912617b88a4107b72646\"" Jul 10 00:36:23.675596 kubelet[2094]: E0710 00:36:23.675217 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:23.678770 env[1315]: time="2025-07-10T00:36:23.678730883Z" level=info msg="CreateContainer within sandbox \"6571ea4c14588882565bc031474feccb76a8f356384e912617b88a4107b72646\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:36:23.695412 env[1315]: time="2025-07-10T00:36:23.695342492Z" level=info msg="CreateContainer within sandbox \"6571ea4c14588882565bc031474feccb76a8f356384e912617b88a4107b72646\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"705124d6aa07678475c1133675cc4818e9002ef17a99fd451406155e63fe9ab1\"" Jul 10 00:36:23.695922 env[1315]: time="2025-07-10T00:36:23.695766289Z" level=info msg="StartContainer for \"705124d6aa07678475c1133675cc4818e9002ef17a99fd451406155e63fe9ab1\"" Jul 10 00:36:23.747619 env[1315]: time="2025-07-10T00:36:23.747510380Z" level=info msg="StartContainer for \"705124d6aa07678475c1133675cc4818e9002ef17a99fd451406155e63fe9ab1\" returns successfully" Jul 10 00:36:24.205565 env[1315]: time="2025-07-10T00:36:24.205530774Z" level=info msg="StopPodSandbox for \"99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86\"" Jul 10 00:36:24.206165 env[1315]: time="2025-07-10T00:36:24.205494134Z" level=info msg="StopPodSandbox for \"1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198\"" Jul 10 00:36:24.207100 env[1315]: time="2025-07-10T00:36:24.205495094Z" level=info msg="StopPodSandbox for \"386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2\"" Jul 10 00:36:24.316107 env[1315]: 2025-07-10 00:36:24.263 [INFO][4101] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" Jul 10 00:36:24.316107 env[1315]: 2025-07-10 00:36:24.263 [INFO][4101] cni-plugin/dataplane_linux.go 
559: Deleting workload's device in netns. ContainerID="99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" iface="eth0" netns="/var/run/netns/cni-a6fc3616-6d1c-01b4-144b-4ee56f7aa0fe" Jul 10 00:36:24.316107 env[1315]: 2025-07-10 00:36:24.263 [INFO][4101] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" iface="eth0" netns="/var/run/netns/cni-a6fc3616-6d1c-01b4-144b-4ee56f7aa0fe" Jul 10 00:36:24.316107 env[1315]: 2025-07-10 00:36:24.264 [INFO][4101] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" iface="eth0" netns="/var/run/netns/cni-a6fc3616-6d1c-01b4-144b-4ee56f7aa0fe" Jul 10 00:36:24.316107 env[1315]: 2025-07-10 00:36:24.264 [INFO][4101] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" Jul 10 00:36:24.316107 env[1315]: 2025-07-10 00:36:24.264 [INFO][4101] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" Jul 10 00:36:24.316107 env[1315]: 2025-07-10 00:36:24.298 [INFO][4119] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" HandleID="k8s-pod-network.99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" Workload="localhost-k8s-calico--apiserver--866977c98d--bvxbn-eth0" Jul 10 00:36:24.316107 env[1315]: 2025-07-10 00:36:24.298 [INFO][4119] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:36:24.316107 env[1315]: 2025-07-10 00:36:24.298 [INFO][4119] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:36:24.316107 env[1315]: 2025-07-10 00:36:24.307 [WARNING][4119] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" HandleID="k8s-pod-network.99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" Workload="localhost-k8s-calico--apiserver--866977c98d--bvxbn-eth0" Jul 10 00:36:24.316107 env[1315]: 2025-07-10 00:36:24.307 [INFO][4119] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" HandleID="k8s-pod-network.99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" Workload="localhost-k8s-calico--apiserver--866977c98d--bvxbn-eth0" Jul 10 00:36:24.316107 env[1315]: 2025-07-10 00:36:24.310 [INFO][4119] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:36:24.316107 env[1315]: 2025-07-10 00:36:24.314 [INFO][4101] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" Jul 10 00:36:24.316582 env[1315]: time="2025-07-10T00:36:24.316408328Z" level=info msg="TearDown network for sandbox \"99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86\" successfully" Jul 10 00:36:24.316582 env[1315]: time="2025-07-10T00:36:24.316452407Z" level=info msg="StopPodSandbox for \"99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86\" returns successfully" Jul 10 00:36:24.317040 env[1315]: time="2025-07-10T00:36:24.317009444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866977c98d-bvxbn,Uid:2c85e535-ff6b-4896-a1a8-ba8b4bb49ac5,Namespace:calico-apiserver,Attempt:1,}" Jul 10 00:36:24.323456 kubelet[2094]: E0710 00:36:24.323386 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:24.346247 kubelet[2094]: I0710 00:36:24.346166 2094 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-2plgf" podStartSLOduration=35.346148893 podStartE2EDuration="35.346148893s" podCreationTimestamp="2025-07-10 00:35:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:36:24.33435593 +0000 UTC m=+42.230935111" watchObservedRunningTime="2025-07-10 00:36:24.346148893 +0000 UTC m=+42.242727954" Jul 10 00:36:24.351910 kernel: kauditd_printk_skb: 570 callbacks suppressed Jul 10 00:36:24.352233 kernel: audit: type=1325 audit(1752107784.348:413): table=filter:109 family=2 entries=18 op=nft_register_rule pid=4144 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:24.352716 kernel: audit: type=1300 audit(1752107784.348:413): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc32eeb60 a2=0 a3=1 items=0 ppid=2205 pid=4144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:24.348000 audit[4144]: NETFILTER_CFG table=filter:109 family=2 entries=18 op=nft_register_rule pid=4144 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:24.348000 audit[4144]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc32eeb60 a2=0 a3=1 items=0 ppid=2205 pid=4144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:24.357458 kernel: audit: type=1327 audit(1752107784.348:413): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:24.348000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:24.359307 env[1315]: 2025-07-10 00:36:24.289 [INFO][4102] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" Jul 10 00:36:24.359307 env[1315]: 2025-07-10 00:36:24.291 [INFO][4102] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" iface="eth0" netns="/var/run/netns/cni-f0b66b86-277d-f9b2-4ef9-5dc05ca4a428" Jul 10 00:36:24.359307 env[1315]: 2025-07-10 00:36:24.291 [INFO][4102] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" iface="eth0" netns="/var/run/netns/cni-f0b66b86-277d-f9b2-4ef9-5dc05ca4a428" Jul 10 00:36:24.359307 env[1315]: 2025-07-10 00:36:24.291 [INFO][4102] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" iface="eth0" netns="/var/run/netns/cni-f0b66b86-277d-f9b2-4ef9-5dc05ca4a428" Jul 10 00:36:24.359307 env[1315]: 2025-07-10 00:36:24.291 [INFO][4102] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" Jul 10 00:36:24.359307 env[1315]: 2025-07-10 00:36:24.291 [INFO][4102] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" Jul 10 00:36:24.359307 env[1315]: 2025-07-10 00:36:24.327 [INFO][4127] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" HandleID="k8s-pod-network.386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" Workload="localhost-k8s-coredns--7c65d6cfc9--zl9sp-eth0" Jul 10 00:36:24.359307 env[1315]: 2025-07-10 00:36:24.327 [INFO][4127] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:36:24.359307 env[1315]: 2025-07-10 00:36:24.327 [INFO][4127] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:36:24.359307 env[1315]: 2025-07-10 00:36:24.339 [WARNING][4127] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" HandleID="k8s-pod-network.386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" Workload="localhost-k8s-coredns--7c65d6cfc9--zl9sp-eth0" Jul 10 00:36:24.359307 env[1315]: 2025-07-10 00:36:24.339 [INFO][4127] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" HandleID="k8s-pod-network.386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" Workload="localhost-k8s-coredns--7c65d6cfc9--zl9sp-eth0" Jul 10 00:36:24.359307 env[1315]: 2025-07-10 00:36:24.341 [INFO][4127] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:36:24.359307 env[1315]: 2025-07-10 00:36:24.355 [INFO][4102] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" Jul 10 00:36:24.359729 env[1315]: time="2025-07-10T00:36:24.359469085Z" level=info msg="TearDown network for sandbox \"386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2\" successfully" Jul 10 00:36:24.359729 env[1315]: time="2025-07-10T00:36:24.359558325Z" level=info msg="StopPodSandbox for \"386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2\" returns successfully" Jul 10 00:36:24.359899 kubelet[2094]: E0710 00:36:24.359870 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:24.361170 env[1315]: time="2025-07-10T00:36:24.360607638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zl9sp,Uid:2b825d2b-e0da-4a05-8c42-0b337b179ba3,Namespace:kube-system,Attempt:1,}" Jul 10 00:36:24.362000 audit[4144]: NETFILTER_CFG table=nat:110 family=2 entries=16 op=nft_register_rule pid=4144 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:24.362000 audit[4144]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4236 a0=3 a1=ffffc32eeb60 a2=0 a3=1 items=0 ppid=2205 pid=4144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:24.369550 kernel: audit: type=1325 audit(1752107784.362:414): table=nat:110 family=2 entries=16 op=nft_register_rule pid=4144 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:24.369623 kernel: audit: type=1300 audit(1752107784.362:414): arch=c00000b7 syscall=211 success=yes exit=4236 a0=3 a1=ffffc32eeb60 a2=0 a3=1 items=0 ppid=2205 pid=4144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:24.369655 kernel: audit: type=1327 audit(1752107784.362:414): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:24.362000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:24.381000 audit[4162]: NETFILTER_CFG table=filter:111 family=2 entries=15 op=nft_register_rule pid=4162 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:24.384077 env[1315]: 2025-07-10 00:36:24.300 [INFO][4087] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" Jul 10 00:36:24.384077 env[1315]: 2025-07-10 00:36:24.300 [INFO][4087] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" iface="eth0" netns="/var/run/netns/cni-3fca72d9-b6b6-b812-3539-4ffaded046fd" Jul 10 00:36:24.384077 env[1315]: 2025-07-10 00:36:24.300 [INFO][4087] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" iface="eth0" netns="/var/run/netns/cni-3fca72d9-b6b6-b812-3539-4ffaded046fd" Jul 10 00:36:24.384077 env[1315]: 2025-07-10 00:36:24.300 [INFO][4087] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" iface="eth0" netns="/var/run/netns/cni-3fca72d9-b6b6-b812-3539-4ffaded046fd" Jul 10 00:36:24.384077 env[1315]: 2025-07-10 00:36:24.300 [INFO][4087] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" Jul 10 00:36:24.384077 env[1315]: 2025-07-10 00:36:24.300 [INFO][4087] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" Jul 10 00:36:24.384077 env[1315]: 2025-07-10 00:36:24.348 [INFO][4135] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" HandleID="k8s-pod-network.1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" Workload="localhost-k8s-csi--node--driver--fxvwp-eth0" Jul 10 00:36:24.384077 env[1315]: 2025-07-10 00:36:24.349 [INFO][4135] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:36:24.384077 env[1315]: 2025-07-10 00:36:24.360 [INFO][4135] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:36:24.384077 env[1315]: 2025-07-10 00:36:24.374 [WARNING][4135] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" HandleID="k8s-pod-network.1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" Workload="localhost-k8s-csi--node--driver--fxvwp-eth0" Jul 10 00:36:24.384077 env[1315]: 2025-07-10 00:36:24.374 [INFO][4135] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" HandleID="k8s-pod-network.1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" Workload="localhost-k8s-csi--node--driver--fxvwp-eth0" Jul 10 00:36:24.384077 env[1315]: 2025-07-10 00:36:24.378 [INFO][4135] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:36:24.384077 env[1315]: 2025-07-10 00:36:24.381 [INFO][4087] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" Jul 10 00:36:24.381000 audit[4162]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffe6a04540 a2=0 a3=1 items=0 ppid=2205 pid=4162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:24.388359 kernel: audit: type=1325 audit(1752107784.381:415): table=filter:111 family=2 entries=15 op=nft_register_rule pid=4162 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:24.388392 kernel: audit: type=1300 audit(1752107784.381:415): arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffe6a04540 a2=0 a3=1 items=0 ppid=2205 pid=4162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:24.388411 kernel: audit: type=1327 audit(1752107784.381:415): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:24.381000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:24.388562 env[1315]: time="2025-07-10T00:36:24.388525935Z" level=info msg="TearDown network for sandbox \"1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198\" successfully" Jul 10 00:36:24.388645 env[1315]: time="2025-07-10T00:36:24.388628734Z" level=info msg="StopPodSandbox for \"1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198\" returns successfully" Jul 10 00:36:24.389573 env[1315]: time="2025-07-10T00:36:24.389326530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxvwp,Uid:d132b5af-8e1a-4884-a0af-6e4f358a849a,Namespace:calico-system,Attempt:1,}" Jul 10 00:36:24.392000 audit[4162]: NETFILTER_CFG table=nat:112 family=2 entries=37 op=nft_register_chain pid=4162 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:24.392000 audit[4162]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14964 a0=3 a1=ffffe6a04540 a2=0 a3=1 items=0 ppid=2205 pid=4162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:24.392000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:24.395460 kernel: audit: type=1325 audit(1752107784.392:416): table=nat:112 family=2 entries=37 op=nft_register_chain pid=4162 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:24.475842 systemd[1]: run-netns-cni\x2d3fca72d9\x2db6b6\x2db812\x2d3539\x2d4ffaded046fd.mount: Deactivated successfully. Jul 10 00:36:24.475983 systemd[1]: run-netns-cni\x2da6fc3616\x2d6d1c\x2d01b4\x2d144b\x2d4ee56f7aa0fe.mount: Deactivated successfully. Jul 10 00:36:24.476073 systemd[1]: run-netns-cni\x2df0b66b86\x2d277d\x2df9b2\x2d4ef9\x2d5dc05ca4a428.mount: Deactivated successfully. 
Jul 10 00:36:24.522215 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 10 00:36:24.522316 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali9d5b822b871: link becomes ready Jul 10 00:36:24.521526 systemd-networkd[1098]: cali9d5b822b871: Link UP Jul 10 00:36:24.521704 systemd-networkd[1098]: cali9d5b822b871: Gained carrier Jul 10 00:36:24.534882 env[1315]: 2025-07-10 00:36:24.425 [INFO][4146] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--866977c98d--bvxbn-eth0 calico-apiserver-866977c98d- calico-apiserver 2c85e535-ff6b-4896-a1a8-ba8b4bb49ac5 1037 0 2025-07-10 00:35:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:866977c98d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-866977c98d-bvxbn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9d5b822b871 [] [] }} ContainerID="044aa093e9e11461272206a37bd50161eb2616b8d97f2bee78c4a99ec4c0c216" Namespace="calico-apiserver" Pod="calico-apiserver-866977c98d-bvxbn" WorkloadEndpoint="localhost-k8s-calico--apiserver--866977c98d--bvxbn-" Jul 10 00:36:24.534882 env[1315]: 2025-07-10 00:36:24.425 [INFO][4146] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="044aa093e9e11461272206a37bd50161eb2616b8d97f2bee78c4a99ec4c0c216" Namespace="calico-apiserver" Pod="calico-apiserver-866977c98d-bvxbn" WorkloadEndpoint="localhost-k8s-calico--apiserver--866977c98d--bvxbn-eth0" Jul 10 00:36:24.534882 env[1315]: 2025-07-10 00:36:24.475 [INFO][4197] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="044aa093e9e11461272206a37bd50161eb2616b8d97f2bee78c4a99ec4c0c216" HandleID="k8s-pod-network.044aa093e9e11461272206a37bd50161eb2616b8d97f2bee78c4a99ec4c0c216" Workload="localhost-k8s-calico--apiserver--866977c98d--bvxbn-eth0" Jul 10 00:36:24.534882 env[1315]: 2025-07-10 00:36:24.476 [INFO][4197] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="044aa093e9e11461272206a37bd50161eb2616b8d97f2bee78c4a99ec4c0c216" HandleID="k8s-pod-network.044aa093e9e11461272206a37bd50161eb2616b8d97f2bee78c4a99ec4c0c216" Workload="localhost-k8s-calico--apiserver--866977c98d--bvxbn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a2e30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-866977c98d-bvxbn", "timestamp":"2025-07-10 00:36:24.475935442 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:36:24.534882 env[1315]: 2025-07-10 00:36:24.476 [INFO][4197] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:36:24.534882 env[1315]: 2025-07-10 00:36:24.476 [INFO][4197] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:36:24.534882 env[1315]: 2025-07-10 00:36:24.476 [INFO][4197] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:36:24.534882 env[1315]: 2025-07-10 00:36:24.486 [INFO][4197] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.044aa093e9e11461272206a37bd50161eb2616b8d97f2bee78c4a99ec4c0c216" host="localhost" Jul 10 00:36:24.534882 env[1315]: 2025-07-10 00:36:24.492 [INFO][4197] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:36:24.534882 env[1315]: 2025-07-10 00:36:24.498 [INFO][4197] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:36:24.534882 env[1315]: 2025-07-10 00:36:24.500 [INFO][4197] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:36:24.534882 env[1315]: 2025-07-10 00:36:24.502 [INFO][4197] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:36:24.534882 env[1315]: 2025-07-10 00:36:24.502 [INFO][4197] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.044aa093e9e11461272206a37bd50161eb2616b8d97f2bee78c4a99ec4c0c216" host="localhost" Jul 10 00:36:24.534882 env[1315]: 2025-07-10 00:36:24.504 [INFO][4197] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.044aa093e9e11461272206a37bd50161eb2616b8d97f2bee78c4a99ec4c0c216 Jul 10 00:36:24.534882 env[1315]: 2025-07-10 00:36:24.508 [INFO][4197] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.044aa093e9e11461272206a37bd50161eb2616b8d97f2bee78c4a99ec4c0c216" host="localhost" Jul 10 00:36:24.534882 env[1315]: 2025-07-10 00:36:24.513 [INFO][4197] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.044aa093e9e11461272206a37bd50161eb2616b8d97f2bee78c4a99ec4c0c216" host="localhost" Jul 10 00:36:24.534882 env[1315]: 2025-07-10 00:36:24.514 [INFO][4197] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.044aa093e9e11461272206a37bd50161eb2616b8d97f2bee78c4a99ec4c0c216" host="localhost" Jul 10 00:36:24.534882 env[1315]: 2025-07-10 00:36:24.514 [INFO][4197] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:36:24.534882 env[1315]: 2025-07-10 00:36:24.514 [INFO][4197] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="044aa093e9e11461272206a37bd50161eb2616b8d97f2bee78c4a99ec4c0c216" HandleID="k8s-pod-network.044aa093e9e11461272206a37bd50161eb2616b8d97f2bee78c4a99ec4c0c216" Workload="localhost-k8s-calico--apiserver--866977c98d--bvxbn-eth0" Jul 10 00:36:24.535540 env[1315]: 2025-07-10 00:36:24.517 [INFO][4146] cni-plugin/k8s.go 418: Populated endpoint ContainerID="044aa093e9e11461272206a37bd50161eb2616b8d97f2bee78c4a99ec4c0c216" Namespace="calico-apiserver" Pod="calico-apiserver-866977c98d-bvxbn" WorkloadEndpoint="localhost-k8s-calico--apiserver--866977c98d--bvxbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--866977c98d--bvxbn-eth0", GenerateName:"calico-apiserver-866977c98d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2c85e535-ff6b-4896-a1a8-ba8b4bb49ac5", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 35, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"866977c98d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-866977c98d-bvxbn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d5b822b871", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:36:24.535540 env[1315]: 2025-07-10 00:36:24.517 [INFO][4146] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="044aa093e9e11461272206a37bd50161eb2616b8d97f2bee78c4a99ec4c0c216" Namespace="calico-apiserver" Pod="calico-apiserver-866977c98d-bvxbn" WorkloadEndpoint="localhost-k8s-calico--apiserver--866977c98d--bvxbn-eth0" Jul 10 00:36:24.535540 env[1315]: 2025-07-10 00:36:24.517 [INFO][4146] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9d5b822b871 ContainerID="044aa093e9e11461272206a37bd50161eb2616b8d97f2bee78c4a99ec4c0c216" Namespace="calico-apiserver" Pod="calico-apiserver-866977c98d-bvxbn" WorkloadEndpoint="localhost-k8s-calico--apiserver--866977c98d--bvxbn-eth0" Jul 10 00:36:24.535540 env[1315]: 2025-07-10 00:36:24.521 [INFO][4146] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="044aa093e9e11461272206a37bd50161eb2616b8d97f2bee78c4a99ec4c0c216" Namespace="calico-apiserver" Pod="calico-apiserver-866977c98d-bvxbn" WorkloadEndpoint="localhost-k8s-calico--apiserver--866977c98d--bvxbn-eth0" Jul 10 00:36:24.535540 env[1315]: 2025-07-10 00:36:24.521 [INFO][4146] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="044aa093e9e11461272206a37bd50161eb2616b8d97f2bee78c4a99ec4c0c216" Namespace="calico-apiserver" Pod="calico-apiserver-866977c98d-bvxbn" WorkloadEndpoint="localhost-k8s-calico--apiserver--866977c98d--bvxbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--866977c98d--bvxbn-eth0", GenerateName:"calico-apiserver-866977c98d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2c85e535-ff6b-4896-a1a8-ba8b4bb49ac5", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 35, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"866977c98d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"044aa093e9e11461272206a37bd50161eb2616b8d97f2bee78c4a99ec4c0c216", Pod:"calico-apiserver-866977c98d-bvxbn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d5b822b871", MAC:"36:08:1a:5c:ab:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:36:24.535540 env[1315]: 2025-07-10 00:36:24.532 [INFO][4146] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="044aa093e9e11461272206a37bd50161eb2616b8d97f2bee78c4a99ec4c0c216" Namespace="calico-apiserver" Pod="calico-apiserver-866977c98d-bvxbn" WorkloadEndpoint="localhost-k8s-calico--apiserver--866977c98d--bvxbn-eth0" Jul 10 00:36:24.540000 audit[4232]: NETFILTER_CFG table=filter:113 family=2 entries=54 op=nft_register_chain pid=4232 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 00:36:24.540000 audit[4232]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29380 a0=3 a1=fffffdd7a8a0 a2=0 a3=ffffa4bb1fa8 items=0 ppid=3478 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:24.540000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 00:36:24.547490 env[1315]: time="2025-07-10T00:36:24.547373494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:36:24.547490 env[1315]: time="2025-07-10T00:36:24.547411814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:36:24.547712 env[1315]: time="2025-07-10T00:36:24.547655173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:36:24.547930 env[1315]: time="2025-07-10T00:36:24.547894051Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/044aa093e9e11461272206a37bd50161eb2616b8d97f2bee78c4a99ec4c0c216 pid=4239 runtime=io.containerd.runc.v2 Jul 10 00:36:24.650037 systemd-networkd[1098]: cali1e5b0a0a761: Link UP Jul 10 00:36:24.650491 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali1e5b0a0a761: link becomes ready Jul 10 00:36:24.651812 systemd-networkd[1098]: cali1e5b0a0a761: Gained carrier Jul 10 00:36:24.654331 systemd-resolved[1236]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:36:24.675754 env[1315]: 2025-07-10 00:36:24.441 [INFO][4163] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--zl9sp-eth0 coredns-7c65d6cfc9- kube-system 2b825d2b-e0da-4a05-8c42-0b337b179ba3 1038 0 2025-07-10 00:35:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-zl9sp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1e5b0a0a761 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="54e5600895ff7226c89cb6c54a555a927eec0f757e1c63994945a15be4603a5e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zl9sp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zl9sp-" Jul 10 00:36:24.675754 env[1315]: 2025-07-10 00:36:24.441 [INFO][4163] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="54e5600895ff7226c89cb6c54a555a927eec0f757e1c63994945a15be4603a5e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zl9sp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zl9sp-eth0" Jul 10 00:36:24.675754 env[1315]: 2025-07-10 00:36:24.492 [INFO][4204] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="54e5600895ff7226c89cb6c54a555a927eec0f757e1c63994945a15be4603a5e" HandleID="k8s-pod-network.54e5600895ff7226c89cb6c54a555a927eec0f757e1c63994945a15be4603a5e" Workload="localhost-k8s-coredns--7c65d6cfc9--zl9sp-eth0" Jul 10 00:36:24.675754 env[1315]: 2025-07-10 00:36:24.492 [INFO][4204] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="54e5600895ff7226c89cb6c54a555a927eec0f757e1c63994945a15be4603a5e" HandleID="k8s-pod-network.54e5600895ff7226c89cb6c54a555a927eec0f757e1c63994945a15be4603a5e" Workload="localhost-k8s-coredns--7c65d6cfc9--zl9sp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000132a80), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-zl9sp", "timestamp":"2025-07-10 00:36:24.492236296 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:36:24.675754 env[1315]: 2025-07-10 00:36:24.492 [INFO][4204] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:36:24.675754 env[1315]: 2025-07-10 00:36:24.514 [INFO][4204] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:36:24.675754 env[1315]: 2025-07-10 00:36:24.514 [INFO][4204] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:36:24.675754 env[1315]: 2025-07-10 00:36:24.588 [INFO][4204] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.54e5600895ff7226c89cb6c54a555a927eec0f757e1c63994945a15be4603a5e" host="localhost" Jul 10 00:36:24.675754 env[1315]: 2025-07-10 00:36:24.593 [INFO][4204] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:36:24.675754 env[1315]: 2025-07-10 00:36:24.598 [INFO][4204] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:36:24.675754 env[1315]: 2025-07-10 00:36:24.601 [INFO][4204] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:36:24.675754 env[1315]: 2025-07-10 00:36:24.603 [INFO][4204] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:36:24.675754 env[1315]: 2025-07-10 00:36:24.603 [INFO][4204] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.54e5600895ff7226c89cb6c54a555a927eec0f757e1c63994945a15be4603a5e" host="localhost" Jul 10 00:36:24.675754 env[1315]: 2025-07-10 00:36:24.607 [INFO][4204] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.54e5600895ff7226c89cb6c54a555a927eec0f757e1c63994945a15be4603a5e Jul 10 00:36:24.675754 env[1315]: 2025-07-10 00:36:24.612 [INFO][4204] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.54e5600895ff7226c89cb6c54a555a927eec0f757e1c63994945a15be4603a5e" host="localhost" Jul 10 00:36:24.675754 env[1315]: 2025-07-10 00:36:24.627 [INFO][4204] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.54e5600895ff7226c89cb6c54a555a927eec0f757e1c63994945a15be4603a5e" host="localhost" Jul 10 00:36:24.675754 env[1315]: 2025-07-10 00:36:24.627 [INFO][4204] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.54e5600895ff7226c89cb6c54a555a927eec0f757e1c63994945a15be4603a5e" host="localhost" Jul 10 00:36:24.675754 env[1315]: 2025-07-10 00:36:24.627 [INFO][4204] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:36:24.675754 env[1315]: 2025-07-10 00:36:24.627 [INFO][4204] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="54e5600895ff7226c89cb6c54a555a927eec0f757e1c63994945a15be4603a5e" HandleID="k8s-pod-network.54e5600895ff7226c89cb6c54a555a927eec0f757e1c63994945a15be4603a5e" Workload="localhost-k8s-coredns--7c65d6cfc9--zl9sp-eth0" Jul 10 00:36:24.679515 env[1315]: 2025-07-10 00:36:24.640 [INFO][4163] cni-plugin/k8s.go 418: Populated endpoint ContainerID="54e5600895ff7226c89cb6c54a555a927eec0f757e1c63994945a15be4603a5e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zl9sp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zl9sp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--zl9sp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2b825d2b-e0da-4a05-8c42-0b337b179ba3", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 35, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-zl9sp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e5b0a0a761", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:36:24.679515 env[1315]: 2025-07-10 00:36:24.640 [INFO][4163] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="54e5600895ff7226c89cb6c54a555a927eec0f757e1c63994945a15be4603a5e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zl9sp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zl9sp-eth0" Jul 10 00:36:24.679515 env[1315]: 2025-07-10 00:36:24.640 [INFO][4163] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e5b0a0a761 ContainerID="54e5600895ff7226c89cb6c54a555a927eec0f757e1c63994945a15be4603a5e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zl9sp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zl9sp-eth0" Jul 10 00:36:24.679515 env[1315]: 2025-07-10 00:36:24.650 [INFO][4163] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="54e5600895ff7226c89cb6c54a555a927eec0f757e1c63994945a15be4603a5e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zl9sp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zl9sp-eth0" Jul 10 00:36:24.679515 env[1315]: 2025-07-10 00:36:24.652 
[INFO][4163] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="54e5600895ff7226c89cb6c54a555a927eec0f757e1c63994945a15be4603a5e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zl9sp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zl9sp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--zl9sp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2b825d2b-e0da-4a05-8c42-0b337b179ba3", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 35, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"54e5600895ff7226c89cb6c54a555a927eec0f757e1c63994945a15be4603a5e", Pod:"coredns-7c65d6cfc9-zl9sp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e5b0a0a761", MAC:"32:a8:ed:a8:a3:74", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:36:24.679515 env[1315]: 2025-07-10 00:36:24.666 [INFO][4163] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="54e5600895ff7226c89cb6c54a555a927eec0f757e1c63994945a15be4603a5e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zl9sp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zl9sp-eth0" Jul 10 00:36:24.689000 audit[4281]: NETFILTER_CFG table=filter:114 family=2 entries=40 op=nft_register_chain pid=4281 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 00:36:24.689000 audit[4281]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20328 a0=3 a1=fffff6b28300 a2=0 a3=ffffb07d6fa8 items=0 ppid=3478 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:24.689000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 00:36:24.706631 env[1315]: time="2025-07-10T00:36:24.706555971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:36:24.706631 env[1315]: time="2025-07-10T00:36:24.706603051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:36:24.706824 env[1315]: time="2025-07-10T00:36:24.706613571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:36:24.707053 env[1315]: time="2025-07-10T00:36:24.707017208Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/54e5600895ff7226c89cb6c54a555a927eec0f757e1c63994945a15be4603a5e pid=4294 runtime=io.containerd.runc.v2 Jul 10 00:36:24.716489 env[1315]: time="2025-07-10T00:36:24.713714125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866977c98d-bvxbn,Uid:2c85e535-ff6b-4896-a1a8-ba8b4bb49ac5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"044aa093e9e11461272206a37bd50161eb2616b8d97f2bee78c4a99ec4c0c216\"" Jul 10 00:36:24.741467 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calife3656df14d: link becomes ready Jul 10 00:36:24.739741 systemd-networkd[1098]: calife3656df14d: Link UP Jul 10 00:36:24.740291 systemd-networkd[1098]: calife3656df14d: Gained carrier Jul 10 00:36:24.778879 systemd-resolved[1236]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:36:24.792456 env[1315]: 2025-07-10 00:36:24.464 [INFO][4174] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--fxvwp-eth0 csi-node-driver- calico-system d132b5af-8e1a-4884-a0af-6e4f358a849a 1039 0 2025-07-10 00:36:02 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-fxvwp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calife3656df14d [] [] }} ContainerID="3f49ede495569ac49de4d859606ab87018c37d1229ccf1639377e9b6f81f1c1e" Namespace="calico-system" Pod="csi-node-driver-fxvwp" WorkloadEndpoint="localhost-k8s-csi--node--driver--fxvwp-" Jul 10 00:36:24.792456 env[1315]: 2025-07-10 00:36:24.464 [INFO][4174] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3f49ede495569ac49de4d859606ab87018c37d1229ccf1639377e9b6f81f1c1e" Namespace="calico-system" Pod="csi-node-driver-fxvwp" WorkloadEndpoint="localhost-k8s-csi--node--driver--fxvwp-eth0" Jul 10 00:36:24.792456 env[1315]: 2025-07-10 00:36:24.520 [INFO][4212] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f49ede495569ac49de4d859606ab87018c37d1229ccf1639377e9b6f81f1c1e" HandleID="k8s-pod-network.3f49ede495569ac49de4d859606ab87018c37d1229ccf1639377e9b6f81f1c1e" Workload="localhost-k8s-csi--node--driver--fxvwp-eth0" Jul 10 00:36:24.792456 env[1315]: 2025-07-10 00:36:24.520 [INFO][4212] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3f49ede495569ac49de4d859606ab87018c37d1229ccf1639377e9b6f81f1c1e" HandleID="k8s-pod-network.3f49ede495569ac49de4d859606ab87018c37d1229ccf1639377e9b6f81f1c1e" Workload="localhost-k8s-csi--node--driver--fxvwp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001194b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-fxvwp", "timestamp":"2025-07-10 00:36:24.520159353 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:36:24.792456 env[1315]: 2025-07-10 00:36:24.520 [INFO][4212] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:36:24.792456 env[1315]: 2025-07-10 00:36:24.627 [INFO][4212] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:36:24.792456 env[1315]: 2025-07-10 00:36:24.628 [INFO][4212] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:36:24.792456 env[1315]: 2025-07-10 00:36:24.688 [INFO][4212] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3f49ede495569ac49de4d859606ab87018c37d1229ccf1639377e9b6f81f1c1e" host="localhost" Jul 10 00:36:24.792456 env[1315]: 2025-07-10 00:36:24.695 [INFO][4212] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:36:24.792456 env[1315]: 2025-07-10 00:36:24.700 [INFO][4212] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:36:24.792456 env[1315]: 2025-07-10 00:36:24.702 [INFO][4212] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:36:24.792456 env[1315]: 2025-07-10 00:36:24.704 [INFO][4212] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:36:24.792456 env[1315]: 2025-07-10 00:36:24.704 [INFO][4212] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3f49ede495569ac49de4d859606ab87018c37d1229ccf1639377e9b6f81f1c1e" host="localhost" Jul 10 00:36:24.792456 env[1315]: 2025-07-10 00:36:24.712 [INFO][4212] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3f49ede495569ac49de4d859606ab87018c37d1229ccf1639377e9b6f81f1c1e Jul 10 00:36:24.792456 env[1315]: 2025-07-10 00:36:24.722 [INFO][4212] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3f49ede495569ac49de4d859606ab87018c37d1229ccf1639377e9b6f81f1c1e" host="localhost" Jul 10 00:36:24.792456 env[1315]: 2025-07-10 00:36:24.731 [INFO][4212] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.3f49ede495569ac49de4d859606ab87018c37d1229ccf1639377e9b6f81f1c1e" host="localhost" Jul 10 00:36:24.792456 env[1315]: 2025-07-10 00:36:24.732 [INFO][4212] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.3f49ede495569ac49de4d859606ab87018c37d1229ccf1639377e9b6f81f1c1e" host="localhost" Jul 10 00:36:24.792456 env[1315]: 2025-07-10 00:36:24.732 [INFO][4212] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:36:24.792456 env[1315]: 2025-07-10 00:36:24.732 [INFO][4212] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="3f49ede495569ac49de4d859606ab87018c37d1229ccf1639377e9b6f81f1c1e" HandleID="k8s-pod-network.3f49ede495569ac49de4d859606ab87018c37d1229ccf1639377e9b6f81f1c1e" Workload="localhost-k8s-csi--node--driver--fxvwp-eth0" Jul 10 00:36:24.791000 audit[4326]: NETFILTER_CFG table=filter:115 family=2 entries=54 op=nft_register_chain pid=4326 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 00:36:24.791000 audit[4326]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25976 a0=3 a1=fffff3ad0240 a2=0 a3=ffff81276fa8 items=0 ppid=3478 pid=4326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:24.791000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 00:36:24.793186 env[1315]: 2025-07-10 00:36:24.736 [INFO][4174] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3f49ede495569ac49de4d859606ab87018c37d1229ccf1639377e9b6f81f1c1e" Namespace="calico-system" Pod="csi-node-driver-fxvwp" WorkloadEndpoint="localhost-k8s-csi--node--driver--fxvwp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fxvwp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d132b5af-8e1a-4884-a0af-6e4f358a849a", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 36, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-fxvwp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calife3656df14d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:36:24.793186 env[1315]: 2025-07-10 00:36:24.736 [INFO][4174] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="3f49ede495569ac49de4d859606ab87018c37d1229ccf1639377e9b6f81f1c1e" Namespace="calico-system" Pod="csi-node-driver-fxvwp" WorkloadEndpoint="localhost-k8s-csi--node--driver--fxvwp-eth0" Jul 10 00:36:24.793186 env[1315]: 2025-07-10 00:36:24.736 [INFO][4174] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calife3656df14d ContainerID="3f49ede495569ac49de4d859606ab87018c37d1229ccf1639377e9b6f81f1c1e" Namespace="calico-system" Pod="csi-node-driver-fxvwp" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--fxvwp-eth0" Jul 10 00:36:24.793186 env[1315]: 2025-07-10 00:36:24.757 [INFO][4174] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f49ede495569ac49de4d859606ab87018c37d1229ccf1639377e9b6f81f1c1e" Namespace="calico-system" Pod="csi-node-driver-fxvwp" WorkloadEndpoint="localhost-k8s-csi--node--driver--fxvwp-eth0" Jul 10 00:36:24.793186 env[1315]: 2025-07-10 00:36:24.760 [INFO][4174] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3f49ede495569ac49de4d859606ab87018c37d1229ccf1639377e9b6f81f1c1e" Namespace="calico-system" Pod="csi-node-driver-fxvwp" WorkloadEndpoint="localhost-k8s-csi--node--driver--fxvwp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fxvwp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d132b5af-8e1a-4884-a0af-6e4f358a849a", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 36, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3f49ede495569ac49de4d859606ab87018c37d1229ccf1639377e9b6f81f1c1e", Pod:"csi-node-driver-fxvwp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calife3656df14d", MAC:"66:45:83:5d:1f:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:36:24.793186 env[1315]: 2025-07-10 00:36:24.776 [INFO][4174] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3f49ede495569ac49de4d859606ab87018c37d1229ccf1639377e9b6f81f1c1e" Namespace="calico-system" Pod="csi-node-driver-fxvwp" WorkloadEndpoint="localhost-k8s-csi--node--driver--fxvwp-eth0" Jul 10 00:36:24.815480 env[1315]: time="2025-07-10T00:36:24.815414218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zl9sp,Uid:2b825d2b-e0da-4a05-8c42-0b337b179ba3,Namespace:kube-system,Attempt:1,} returns sandbox id \"54e5600895ff7226c89cb6c54a555a927eec0f757e1c63994945a15be4603a5e\"" Jul 10 00:36:24.816299 kubelet[2094]: E0710 00:36:24.816258 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:24.819898 env[1315]: time="2025-07-10T00:36:24.819855989Z" level=info msg="CreateContainer within sandbox \"54e5600895ff7226c89cb6c54a555a927eec0f757e1c63994945a15be4603a5e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:36:24.832488 env[1315]: time="2025-07-10T00:36:24.830949276Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:36:24.832488 env[1315]: time="2025-07-10T00:36:24.831006636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:36:24.832488 env[1315]: time="2025-07-10T00:36:24.831017196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:36:24.832488 env[1315]: time="2025-07-10T00:36:24.831202715Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f49ede495569ac49de4d859606ab87018c37d1229ccf1639377e9b6f81f1c1e pid=4348 runtime=io.containerd.runc.v2 Jul 10 00:36:24.841886 env[1315]: time="2025-07-10T00:36:24.841825445Z" level=info msg="CreateContainer within sandbox \"54e5600895ff7226c89cb6c54a555a927eec0f757e1c63994945a15be4603a5e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"88571a3b99a70e35f6e5acd430d6b89e4d29cf0d2556dc21e4257985f19fc18a\"" Jul 10 00:36:24.842396 env[1315]: time="2025-07-10T00:36:24.842343322Z" level=info msg="StartContainer for \"88571a3b99a70e35f6e5acd430d6b89e4d29cf0d2556dc21e4257985f19fc18a\"" Jul 10 00:36:24.906185 systemd-resolved[1236]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:36:24.923055 env[1315]: time="2025-07-10T00:36:24.922994273Z" level=info msg="StartContainer for \"88571a3b99a70e35f6e5acd430d6b89e4d29cf0d2556dc21e4257985f19fc18a\" returns successfully" Jul 10 00:36:24.927035 env[1315]: time="2025-07-10T00:36:24.926997167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxvwp,Uid:d132b5af-8e1a-4884-a0af-6e4f358a849a,Namespace:calico-system,Attempt:1,} returns sandbox id \"3f49ede495569ac49de4d859606ab87018c37d1229ccf1639377e9b6f81f1c1e\"" Jul 10 00:36:24.932580 systemd-networkd[1098]: cali6c37deeddec: Gained IPv6LL Jul 10 00:36:25.039971 kubelet[2094]: I0710 00:36:25.038985 2094 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:36:25.203962 env[1315]: time="2025-07-10T00:36:25.203892347Z" level=info msg="StopPodSandbox for \"9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464\"" Jul 10 00:36:25.251925 systemd-networkd[1098]: cali6516e0cc8a9: Gained IPv6LL Jul 10 00:36:25.323037 env[1315]: 2025-07-10 00:36:25.286 [INFO][4475] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" Jul 10 00:36:25.323037 env[1315]: 2025-07-10 00:36:25.286 [INFO][4475] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" iface="eth0" netns="/var/run/netns/cni-4fd36e40-4957-b11b-f0ad-f653cd2dc610" Jul 10 00:36:25.323037 env[1315]: 2025-07-10 00:36:25.286 [INFO][4475] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" iface="eth0" netns="/var/run/netns/cni-4fd36e40-4957-b11b-f0ad-f653cd2dc610" Jul 10 00:36:25.323037 env[1315]: 2025-07-10 00:36:25.286 [INFO][4475] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" iface="eth0" netns="/var/run/netns/cni-4fd36e40-4957-b11b-f0ad-f653cd2dc610" Jul 10 00:36:25.323037 env[1315]: 2025-07-10 00:36:25.286 [INFO][4475] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" Jul 10 00:36:25.323037 env[1315]: 2025-07-10 00:36:25.286 [INFO][4475] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" Jul 10 00:36:25.323037 env[1315]: 2025-07-10 00:36:25.308 [INFO][4484] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" HandleID="k8s-pod-network.9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" Workload="localhost-k8s-calico--kube--controllers--6946f6c79d--88gpt-eth0" Jul 10 00:36:25.323037 env[1315]: 2025-07-10 00:36:25.308 [INFO][4484] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:36:25.323037 env[1315]: 2025-07-10 00:36:25.308 [INFO][4484] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:36:25.323037 env[1315]: 2025-07-10 00:36:25.316 [WARNING][4484] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" HandleID="k8s-pod-network.9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" Workload="localhost-k8s-calico--kube--controllers--6946f6c79d--88gpt-eth0" Jul 10 00:36:25.323037 env[1315]: 2025-07-10 00:36:25.316 [INFO][4484] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" HandleID="k8s-pod-network.9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" Workload="localhost-k8s-calico--kube--controllers--6946f6c79d--88gpt-eth0" Jul 10 00:36:25.323037 env[1315]: 2025-07-10 00:36:25.318 [INFO][4484] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:36:25.323037 env[1315]: 2025-07-10 00:36:25.319 [INFO][4475] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" Jul 10 00:36:25.323037 env[1315]: time="2025-07-10T00:36:25.322963787Z" level=info msg="TearDown network for sandbox \"9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464\" successfully" Jul 10 00:36:25.323037 env[1315]: time="2025-07-10T00:36:25.322995707Z" level=info msg="StopPodSandbox for \"9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464\" returns successfully" Jul 10 00:36:25.323876 env[1315]: time="2025-07-10T00:36:25.323843661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6946f6c79d-88gpt,Uid:a753c3a5-5f12-42b7-a570-848c45ac60a4,Namespace:calico-system,Attempt:1,}" Jul 10 00:36:25.328770 kubelet[2094]: E0710 00:36:25.328727 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:25.333443 kubelet[2094]: E0710 00:36:25.333387 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:25.382251 kubelet[2094]: I0710 00:36:25.382181 2094 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-zl9sp" podStartSLOduration=36.382161609 podStartE2EDuration="36.382161609s" podCreationTimestamp="2025-07-10 00:35:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:36:25.348786702 +0000 UTC m=+43.245365763" watchObservedRunningTime="2025-07-10 00:36:25.382161609 +0000 UTC m=+43.278740670" Jul 10 00:36:25.420000 audit[4504]: NETFILTER_CFG table=filter:116 family=2 entries=12 op=nft_register_rule pid=4504 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:25.420000 audit[4504]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=fffffe8a1b80 a2=0 a3=1 items=0 ppid=2205 pid=4504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:25.420000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:25.427000 audit[4504]: NETFILTER_CFG table=nat:117 family=2 entries=46 op=nft_register_rule pid=4504 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:25.427000 audit[4504]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14964 a0=3 a1=fffffe8a1b80 a2=0 a3=1 items=0 ppid=2205 pid=4504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:25.427000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:25.473853 systemd[1]: run-containerd-runc-k8s.io-54e5600895ff7226c89cb6c54a555a927eec0f757e1c63994945a15be4603a5e-runc.6rfPeW.mount: Deactivated successfully. Jul 10 00:36:25.474018 systemd[1]: run-netns-cni\x2d4fd36e40\x2d4957\x2db11b\x2df0ad\x2df653cd2dc610.mount: Deactivated successfully. 
Jul 10 00:36:25.514730 systemd-networkd[1098]: calic772ce8e667: Link UP Jul 10 00:36:25.516489 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic772ce8e667: link becomes ready Jul 10 00:36:25.516026 systemd-networkd[1098]: calic772ce8e667: Gained carrier Jul 10 00:36:25.539627 env[1315]: 2025-07-10 00:36:25.445 [INFO][4491] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6946f6c79d--88gpt-eth0 calico-kube-controllers-6946f6c79d- calico-system a753c3a5-5f12-42b7-a570-848c45ac60a4 1075 0 2025-07-10 00:36:02 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6946f6c79d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6946f6c79d-88gpt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic772ce8e667 [] [] }} ContainerID="e50c56e443176676df2e01b1b7c5c641b65014465ce0b76b75cbd160f4349a92" Namespace="calico-system" Pod="calico-kube-controllers-6946f6c79d-88gpt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6946f6c79d--88gpt-" Jul 10 00:36:25.539627 env[1315]: 2025-07-10 00:36:25.445 [INFO][4491] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e50c56e443176676df2e01b1b7c5c641b65014465ce0b76b75cbd160f4349a92" Namespace="calico-system" Pod="calico-kube-controllers-6946f6c79d-88gpt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6946f6c79d--88gpt-eth0" Jul 10 00:36:25.539627 env[1315]: 2025-07-10 00:36:25.472 [INFO][4508] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e50c56e443176676df2e01b1b7c5c641b65014465ce0b76b75cbd160f4349a92" HandleID="k8s-pod-network.e50c56e443176676df2e01b1b7c5c641b65014465ce0b76b75cbd160f4349a92" Workload="localhost-k8s-calico--kube--controllers--6946f6c79d--88gpt-eth0" Jul 10 00:36:25.539627 env[1315]: 2025-07-10 00:36:25.472 [INFO][4508] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e50c56e443176676df2e01b1b7c5c641b65014465ce0b76b75cbd160f4349a92" HandleID="k8s-pod-network.e50c56e443176676df2e01b1b7c5c641b65014465ce0b76b75cbd160f4349a92" Workload="localhost-k8s-calico--kube--controllers--6946f6c79d--88gpt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000352fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6946f6c79d-88gpt", "timestamp":"2025-07-10 00:36:25.472230554 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:36:25.539627 env[1315]: 2025-07-10 00:36:25.472 [INFO][4508] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:36:25.539627 env[1315]: 2025-07-10 00:36:25.472 [INFO][4508] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:36:25.539627 env[1315]: 2025-07-10 00:36:25.472 [INFO][4508] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:36:25.539627 env[1315]: 2025-07-10 00:36:25.481 [INFO][4508] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e50c56e443176676df2e01b1b7c5c641b65014465ce0b76b75cbd160f4349a92" host="localhost" Jul 10 00:36:25.539627 env[1315]: 2025-07-10 00:36:25.486 [INFO][4508] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:36:25.539627 env[1315]: 2025-07-10 00:36:25.493 [INFO][4508] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:36:25.539627 env[1315]: 2025-07-10 00:36:25.496 [INFO][4508] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:36:25.539627 env[1315]: 2025-07-10 00:36:25.498 [INFO][4508] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:36:25.539627 env[1315]: 2025-07-10 00:36:25.498 [INFO][4508] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e50c56e443176676df2e01b1b7c5c641b65014465ce0b76b75cbd160f4349a92" host="localhost" Jul 10 00:36:25.539627 env[1315]: 2025-07-10 00:36:25.500 [INFO][4508] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e50c56e443176676df2e01b1b7c5c641b65014465ce0b76b75cbd160f4349a92 Jul 10 00:36:25.539627 env[1315]: 2025-07-10 00:36:25.504 [INFO][4508] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e50c56e443176676df2e01b1b7c5c641b65014465ce0b76b75cbd160f4349a92" host="localhost" Jul 10 00:36:25.539627 env[1315]: 2025-07-10 00:36:25.510 [INFO][4508] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.e50c56e443176676df2e01b1b7c5c641b65014465ce0b76b75cbd160f4349a92" host="localhost" Jul 10 00:36:25.539627 env[1315]: 2025-07-10 00:36:25.510 [INFO][4508] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.e50c56e443176676df2e01b1b7c5c641b65014465ce0b76b75cbd160f4349a92" host="localhost" Jul 10 00:36:25.539627 env[1315]: 2025-07-10 00:36:25.510 [INFO][4508] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:36:25.539627 env[1315]: 2025-07-10 00:36:25.510 [INFO][4508] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="e50c56e443176676df2e01b1b7c5c641b65014465ce0b76b75cbd160f4349a92" HandleID="k8s-pod-network.e50c56e443176676df2e01b1b7c5c641b65014465ce0b76b75cbd160f4349a92" Workload="localhost-k8s-calico--kube--controllers--6946f6c79d--88gpt-eth0" Jul 10 00:36:25.540466 env[1315]: 2025-07-10 00:36:25.512 [INFO][4491] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e50c56e443176676df2e01b1b7c5c641b65014465ce0b76b75cbd160f4349a92" Namespace="calico-system" Pod="calico-kube-controllers-6946f6c79d-88gpt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6946f6c79d--88gpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6946f6c79d--88gpt-eth0", GenerateName:"calico-kube-controllers-6946f6c79d-", Namespace:"calico-system", SelfLink:"", UID:"a753c3a5-5f12-42b7-a570-848c45ac60a4", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 36, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6946f6c79d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6946f6c79d-88gpt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic772ce8e667", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:36:25.540466 env[1315]: 2025-07-10 00:36:25.513 [INFO][4491] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="e50c56e443176676df2e01b1b7c5c641b65014465ce0b76b75cbd160f4349a92" Namespace="calico-system" Pod="calico-kube-controllers-6946f6c79d-88gpt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6946f6c79d--88gpt-eth0" Jul 10 00:36:25.540466 env[1315]: 2025-07-10 00:36:25.513 [INFO][4491] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic772ce8e667 ContainerID="e50c56e443176676df2e01b1b7c5c641b65014465ce0b76b75cbd160f4349a92" Namespace="calico-system" Pod="calico-kube-controllers-6946f6c79d-88gpt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6946f6c79d--88gpt-eth0" Jul 10 00:36:25.540466 env[1315]: 2025-07-10 00:36:25.516 [INFO][4491] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e50c56e443176676df2e01b1b7c5c641b65014465ce0b76b75cbd160f4349a92" Namespace="calico-system" Pod="calico-kube-controllers-6946f6c79d-88gpt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6946f6c79d--88gpt-eth0" Jul 10 00:36:25.540466 env[1315]: 2025-07-10 00:36:25.521 [INFO][4491] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="e50c56e443176676df2e01b1b7c5c641b65014465ce0b76b75cbd160f4349a92" Namespace="calico-system" Pod="calico-kube-controllers-6946f6c79d-88gpt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6946f6c79d--88gpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6946f6c79d--88gpt-eth0", GenerateName:"calico-kube-controllers-6946f6c79d-", Namespace:"calico-system", SelfLink:"", UID:"a753c3a5-5f12-42b7-a570-848c45ac60a4", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 36, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6946f6c79d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e50c56e443176676df2e01b1b7c5c641b65014465ce0b76b75cbd160f4349a92", Pod:"calico-kube-controllers-6946f6c79d-88gpt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic772ce8e667", MAC:"0a:41:d8:56:7f:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:36:25.540466 env[1315]: 2025-07-10 00:36:25.535 [INFO][4491] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e50c56e443176676df2e01b1b7c5c641b65014465ce0b76b75cbd160f4349a92" Namespace="calico-system" Pod="calico-kube-controllers-6946f6c79d-88gpt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6946f6c79d--88gpt-eth0" Jul 10 00:36:25.546000 audit[4525]: NETFILTER_CFG table=filter:118 family=2 entries=48 op=nft_register_chain pid=4525 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 00:36:25.546000 audit[4525]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23108 a0=3 a1=ffffcb3038d0 a2=0 a3=ffff91454fa8 items=0 ppid=3478 pid=4525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:25.546000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 00:36:25.552350 env[1315]: time="2025-07-10T00:36:25.552279203Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:36:25.552540 env[1315]: time="2025-07-10T00:36:25.552330202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:36:25.552540 env[1315]: time="2025-07-10T00:36:25.552340242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:36:25.552660 env[1315]: time="2025-07-10T00:36:25.552563041Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e50c56e443176676df2e01b1b7c5c641b65014465ce0b76b75cbd160f4349a92 pid=4534 runtime=io.containerd.runc.v2 Jul 10 00:36:25.599427 systemd-resolved[1236]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:36:25.623969 env[1315]: time="2025-07-10T00:36:25.622218396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6946f6c79d-88gpt,Uid:a753c3a5-5f12-42b7-a570-848c45ac60a4,Namespace:calico-system,Attempt:1,} returns sandbox id \"e50c56e443176676df2e01b1b7c5c641b65014465ce0b76b75cbd160f4349a92\"" Jul 10 00:36:25.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.85:22-10.0.0.1:54546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:25.664888 systemd[1]: Started sshd@8-10.0.0.85:22-10.0.0.1:54546.service. Jul 10 00:36:25.712000 audit[4568]: USER_ACCT pid=4568 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:25.713093 sshd[4568]: Accepted publickey for core from 10.0.0.1 port 54546 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:36:25.713000 audit[4568]: CRED_ACQ pid=4568 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:25.713000 audit[4568]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffc32ab30 a2=3 a3=1 items=0 ppid=1 pid=4568 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:25.713000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:36:25.714801 sshd[4568]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:25.720241 systemd-logind[1302]: New session 9 of user core. Jul 10 00:36:25.720828 systemd[1]: Started session-9.scope. 
Jul 10 00:36:25.730000 audit[4568]: USER_START pid=4568 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:25.731000 audit[4571]: CRED_ACQ pid=4571 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:25.763618 systemd-networkd[1098]: cali9d5b822b871: Gained IPv6LL Jul 10 00:36:25.827627 systemd-networkd[1098]: calife3656df14d: Gained IPv6LL Jul 10 00:36:25.913209 sshd[4568]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:25.913000 audit[4568]: USER_END pid=4568 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:25.913000 audit[4568]: CRED_DISP pid=4568 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:25.916412 systemd[1]: sshd@8-10.0.0.85:22-10.0.0.1:54546.service: Deactivated successfully. Jul 10 00:36:25.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.85:22-10.0.0.1:54546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:25.917469 systemd-logind[1302]: Session 9 logged out. Waiting for processes to exit. Jul 10 00:36:25.917500 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 00:36:25.918477 systemd-logind[1302]: Removed session 9. 
Jul 10 00:36:26.100451 env[1315]: time="2025-07-10T00:36:26.100393039Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:26.101759 env[1315]: time="2025-07-10T00:36:26.101726070Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:26.105804 env[1315]: time="2025-07-10T00:36:26.105763885Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:26.107032 env[1315]: time="2025-07-10T00:36:26.106996198Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:26.107627 env[1315]: time="2025-07-10T00:36:26.107588194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 10 00:36:26.109347 env[1315]: time="2025-07-10T00:36:26.109303183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 00:36:26.110003 env[1315]: time="2025-07-10T00:36:26.109968299Z" level=info msg="CreateContainer within sandbox \"6d23b2e04a267d411b87a27ff984e2044b97fcdabb4994cb58b7f26ba1f33fad\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 10 00:36:26.128421 env[1315]: time="2025-07-10T00:36:26.128375184Z" level=info msg="CreateContainer within sandbox \"6d23b2e04a267d411b87a27ff984e2044b97fcdabb4994cb58b7f26ba1f33fad\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"6ecd0dbe1e06cf1968c6a482ed9d97a10299edfffeeb496a620de2c766ddccd6\"" Jul 10 00:36:26.130129 env[1315]: time="2025-07-10T00:36:26.129037060Z" level=info msg="StartContainer for \"6ecd0dbe1e06cf1968c6a482ed9d97a10299edfffeeb496a620de2c766ddccd6\"" Jul 10 00:36:26.203832 env[1315]: time="2025-07-10T00:36:26.203785955Z" level=info msg="StopPodSandbox for \"182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0\"" Jul 10 00:36:26.215474 env[1315]: time="2025-07-10T00:36:26.214381649Z" level=info msg="StartContainer for \"6ecd0dbe1e06cf1968c6a482ed9d97a10299edfffeeb496a620de2c766ddccd6\" returns successfully" Jul 10 00:36:26.302419 env[1315]: 2025-07-10 00:36:26.255 [INFO][4628] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" Jul 10 00:36:26.302419 env[1315]: 2025-07-10 00:36:26.255 [INFO][4628] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" iface="eth0" netns="/var/run/netns/cni-05f129cb-d907-08cb-e6e1-6e87acb642c2" Jul 10 00:36:26.302419 env[1315]: 2025-07-10 00:36:26.255 [INFO][4628] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" iface="eth0" netns="/var/run/netns/cni-05f129cb-d907-08cb-e6e1-6e87acb642c2" Jul 10 00:36:26.302419 env[1315]: 2025-07-10 00:36:26.255 [INFO][4628] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" iface="eth0" netns="/var/run/netns/cni-05f129cb-d907-08cb-e6e1-6e87acb642c2" Jul 10 00:36:26.302419 env[1315]: 2025-07-10 00:36:26.255 [INFO][4628] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" Jul 10 00:36:26.302419 env[1315]: 2025-07-10 00:36:26.255 [INFO][4628] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" Jul 10 00:36:26.302419 env[1315]: 2025-07-10 00:36:26.288 [INFO][4642] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" HandleID="k8s-pod-network.182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" Workload="localhost-k8s-calico--apiserver--866977c98d--2dcmd-eth0" Jul 10 00:36:26.302419 env[1315]: 2025-07-10 00:36:26.288 [INFO][4642] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:36:26.302419 env[1315]: 2025-07-10 00:36:26.288 [INFO][4642] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:36:26.302419 env[1315]: 2025-07-10 00:36:26.297 [WARNING][4642] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" HandleID="k8s-pod-network.182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" Workload="localhost-k8s-calico--apiserver--866977c98d--2dcmd-eth0" Jul 10 00:36:26.302419 env[1315]: 2025-07-10 00:36:26.297 [INFO][4642] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" HandleID="k8s-pod-network.182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" Workload="localhost-k8s-calico--apiserver--866977c98d--2dcmd-eth0" Jul 10 00:36:26.302419 env[1315]: 2025-07-10 00:36:26.298 [INFO][4642] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:36:26.302419 env[1315]: 2025-07-10 00:36:26.300 [INFO][4628] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" Jul 10 00:36:26.303018 env[1315]: time="2025-07-10T00:36:26.302984137Z" level=info msg="TearDown network for sandbox \"182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0\" successfully" Jul 10 00:36:26.303105 env[1315]: time="2025-07-10T00:36:26.303087496Z" level=info msg="StopPodSandbox for \"182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0\" returns successfully" Jul 10 00:36:26.303844 env[1315]: time="2025-07-10T00:36:26.303805572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866977c98d-2dcmd,Uid:96a10c6f-bf44-4fd2-abf3-72068d8168d1,Namespace:calico-apiserver,Attempt:1,}" Jul 10 00:36:26.334719 kubelet[2094]: E0710 00:36:26.333645 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:26.334719 kubelet[2094]: E0710 00:36:26.334639 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:26.347897 kubelet[2094]: I0710 00:36:26.347837 2094 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-ltbnn" podStartSLOduration=21.787839029 podStartE2EDuration="24.347787578s" podCreationTimestamp="2025-07-10 00:36:02 +0000 UTC" firstStartedPulling="2025-07-10 00:36:23.548574919 +0000 UTC m=+41.445153980" lastFinishedPulling="2025-07-10 00:36:26.108523468 +0000 UTC m=+44.005102529" observedRunningTime="2025-07-10 00:36:26.3474995 +0000 UTC m=+44.244078601" watchObservedRunningTime="2025-07-10 00:36:26.347787578 +0000 UTC m=+44.244366639" Jul 10 00:36:26.364000 audit[4669]: NETFILTER_CFG table=filter:119 family=2 entries=12 op=nft_register_rule pid=4669 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:26.364000 audit[4669]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffd24c7f80 a2=0 a3=1 items=0 ppid=2205 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:26.364000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:26.375000 audit[4669]: NETFILTER_CFG table=nat:120 family=2 entries=58 op=nft_register_chain pid=4669 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:26.375000 audit[4669]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20628 a0=3 a1=ffffd24c7f80 a2=0 a3=1 items=0 ppid=2205 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:26.375000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:26.448965 systemd-networkd[1098]: cali08ba20e1536: Link UP Jul 10 00:36:26.450855 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 10 00:36:26.451181 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali08ba20e1536: link becomes ready Jul 10 00:36:26.451013 systemd-networkd[1098]: cali08ba20e1536: Gained carrier Jul 10 00:36:26.474543 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1063631686.mount: Deactivated successfully. Jul 10 00:36:26.474683 systemd[1]: run-netns-cni\x2d05f129cb\x2dd907\x2d08cb\x2de6e1\x2d6e87acb642c2.mount: Deactivated successfully. Jul 10 00:36:26.477765 env[1315]: 2025-07-10 00:36:26.357 [INFO][4652] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--866977c98d--2dcmd-eth0 calico-apiserver-866977c98d- calico-apiserver 96a10c6f-bf44-4fd2-abf3-72068d8168d1 1099 0 2025-07-10 00:35:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:866977c98d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-866977c98d-2dcmd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali08ba20e1536 [] [] }} ContainerID="8ec0235c8d70956911057f9ad003f6e0f262c90d494f37a83d86140aacdc4611" Namespace="calico-apiserver" Pod="calico-apiserver-866977c98d-2dcmd" WorkloadEndpoint="localhost-k8s-calico--apiserver--866977c98d--2dcmd-" Jul 10 00:36:26.477765 env[1315]: 2025-07-10 00:36:26.357 [INFO][4652] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8ec0235c8d70956911057f9ad003f6e0f262c90d494f37a83d86140aacdc4611" Namespace="calico-apiserver" Pod="calico-apiserver-866977c98d-2dcmd" WorkloadEndpoint="localhost-k8s-calico--apiserver--866977c98d--2dcmd-eth0" Jul 10 00:36:26.477765 env[1315]: 2025-07-10 00:36:26.386 [INFO][4668] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ec0235c8d70956911057f9ad003f6e0f262c90d494f37a83d86140aacdc4611" HandleID="k8s-pod-network.8ec0235c8d70956911057f9ad003f6e0f262c90d494f37a83d86140aacdc4611" Workload="localhost-k8s-calico--apiserver--866977c98d--2dcmd-eth0" Jul 10 00:36:26.477765 env[1315]: 2025-07-10 00:36:26.386 [INFO][4668] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8ec0235c8d70956911057f9ad003f6e0f262c90d494f37a83d86140aacdc4611" HandleID="k8s-pod-network.8ec0235c8d70956911057f9ad003f6e0f262c90d494f37a83d86140aacdc4611" Workload="localhost-k8s-calico--apiserver--866977c98d--2dcmd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3970), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-866977c98d-2dcmd", "timestamp":"2025-07-10 00:36:26.386620896 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:36:26.477765 env[1315]: 2025-07-10 00:36:26.386 [INFO][4668] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:36:26.477765 env[1315]: 2025-07-10 00:36:26.386 [INFO][4668] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:36:26.477765 env[1315]: 2025-07-10 00:36:26.387 [INFO][4668] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:36:26.477765 env[1315]: 2025-07-10 00:36:26.395 [INFO][4668] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8ec0235c8d70956911057f9ad003f6e0f262c90d494f37a83d86140aacdc4611" host="localhost" Jul 10 00:36:26.477765 env[1315]: 2025-07-10 00:36:26.403 [INFO][4668] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:36:26.477765 env[1315]: 2025-07-10 00:36:26.415 [INFO][4668] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:36:26.477765 env[1315]: 2025-07-10 00:36:26.421 [INFO][4668] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:36:26.477765 env[1315]: 2025-07-10 00:36:26.423 [INFO][4668] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:36:26.477765 env[1315]: 2025-07-10 00:36:26.423 [INFO][4668] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8ec0235c8d70956911057f9ad003f6e0f262c90d494f37a83d86140aacdc4611" host="localhost" Jul 10 00:36:26.477765 env[1315]: 2025-07-10 00:36:26.425 [INFO][4668] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8ec0235c8d70956911057f9ad003f6e0f262c90d494f37a83d86140aacdc4611 Jul 10 00:36:26.477765 env[1315]: 2025-07-10 00:36:26.430 [INFO][4668] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8ec0235c8d70956911057f9ad003f6e0f262c90d494f37a83d86140aacdc4611" host="localhost" Jul 10 00:36:26.477765 env[1315]: 2025-07-10 00:36:26.442 [INFO][4668] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.8ec0235c8d70956911057f9ad003f6e0f262c90d494f37a83d86140aacdc4611" host="localhost" Jul 10 00:36:26.477765 env[1315]: 2025-07-10 00:36:26.442 [INFO][4668] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.8ec0235c8d70956911057f9ad003f6e0f262c90d494f37a83d86140aacdc4611" host="localhost" Jul 10 00:36:26.477765 env[1315]: 2025-07-10 00:36:26.442 [INFO][4668] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:36:26.477765 env[1315]: 2025-07-10 00:36:26.442 [INFO][4668] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="8ec0235c8d70956911057f9ad003f6e0f262c90d494f37a83d86140aacdc4611" HandleID="k8s-pod-network.8ec0235c8d70956911057f9ad003f6e0f262c90d494f37a83d86140aacdc4611" Workload="localhost-k8s-calico--apiserver--866977c98d--2dcmd-eth0" Jul 10 00:36:26.478540 env[1315]: 2025-07-10 00:36:26.444 [INFO][4652] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8ec0235c8d70956911057f9ad003f6e0f262c90d494f37a83d86140aacdc4611" Namespace="calico-apiserver" Pod="calico-apiserver-866977c98d-2dcmd" WorkloadEndpoint="localhost-k8s-calico--apiserver--866977c98d--2dcmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--866977c98d--2dcmd-eth0", GenerateName:"calico-apiserver-866977c98d-", Namespace:"calico-apiserver", SelfLink:"", UID:"96a10c6f-bf44-4fd2-abf3-72068d8168d1", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 35, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"866977c98d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-866977c98d-2dcmd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali08ba20e1536", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:36:26.478540 env[1315]: 2025-07-10 00:36:26.444 [INFO][4652] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="8ec0235c8d70956911057f9ad003f6e0f262c90d494f37a83d86140aacdc4611" Namespace="calico-apiserver" Pod="calico-apiserver-866977c98d-2dcmd" WorkloadEndpoint="localhost-k8s-calico--apiserver--866977c98d--2dcmd-eth0" Jul 10 00:36:26.478540 env[1315]: 2025-07-10 00:36:26.444 [INFO][4652] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali08ba20e1536 ContainerID="8ec0235c8d70956911057f9ad003f6e0f262c90d494f37a83d86140aacdc4611" Namespace="calico-apiserver" Pod="calico-apiserver-866977c98d-2dcmd" WorkloadEndpoint="localhost-k8s-calico--apiserver--866977c98d--2dcmd-eth0" Jul 10 00:36:26.478540 env[1315]: 2025-07-10 00:36:26.451 [INFO][4652] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ec0235c8d70956911057f9ad003f6e0f262c90d494f37a83d86140aacdc4611" Namespace="calico-apiserver" Pod="calico-apiserver-866977c98d-2dcmd" WorkloadEndpoint="localhost-k8s-calico--apiserver--866977c98d--2dcmd-eth0" Jul 10 00:36:26.478540 env[1315]: 2025-07-10 00:36:26.453 [INFO][4652] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8ec0235c8d70956911057f9ad003f6e0f262c90d494f37a83d86140aacdc4611" Namespace="calico-apiserver" Pod="calico-apiserver-866977c98d-2dcmd" WorkloadEndpoint="localhost-k8s-calico--apiserver--866977c98d--2dcmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--866977c98d--2dcmd-eth0", GenerateName:"calico-apiserver-866977c98d-", Namespace:"calico-apiserver", SelfLink:"", UID:"96a10c6f-bf44-4fd2-abf3-72068d8168d1", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 35, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"866977c98d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ec0235c8d70956911057f9ad003f6e0f262c90d494f37a83d86140aacdc4611", Pod:"calico-apiserver-866977c98d-2dcmd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali08ba20e1536", MAC:"86:f1:4d:19:44:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:36:26.478540 env[1315]: 2025-07-10 00:36:26.473 [INFO][4652] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8ec0235c8d70956911057f9ad003f6e0f262c90d494f37a83d86140aacdc4611" Namespace="calico-apiserver" Pod="calico-apiserver-866977c98d-2dcmd" WorkloadEndpoint="localhost-k8s-calico--apiserver--866977c98d--2dcmd-eth0" Jul 10 00:36:26.489612 env[1315]: time="2025-07-10T00:36:26.489470215Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:36:26.489612 env[1315]: time="2025-07-10T00:36:26.489516775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:36:26.489612 env[1315]: time="2025-07-10T00:36:26.489528335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:36:26.489000 audit[4694]: NETFILTER_CFG table=filter:121 family=2 entries=53 op=nft_register_chain pid=4694 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 00:36:26.490075 env[1315]: time="2025-07-10T00:36:26.489959772Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8ec0235c8d70956911057f9ad003f6e0f262c90d494f37a83d86140aacdc4611 pid=4692 runtime=io.containerd.runc.v2 Jul 10 00:36:26.489000 audit[4694]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26608 a0=3 a1=ffffed32e2d0 a2=0 a3=ffffa5994fa8 items=0 ppid=3478 pid=4694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:26.489000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 00:36:26.524388 systemd-resolved[1236]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:36:26.539587 env[1315]: time="2025-07-10T00:36:26.539540064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866977c98d-2dcmd,Uid:96a10c6f-bf44-4fd2-abf3-72068d8168d1,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8ec0235c8d70956911057f9ad003f6e0f262c90d494f37a83d86140aacdc4611\"" Jul 10 00:36:26.659571 systemd-networkd[1098]: cali1e5b0a0a761: Gained IPv6LL Jul 10 00:36:27.299609 systemd-networkd[1098]: calic772ce8e667: Gained IPv6LL Jul 10 00:36:27.335973 kubelet[2094]: I0710 00:36:27.335932 2094 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:36:28.451554 systemd-networkd[1098]: cali08ba20e1536: Gained IPv6LL Jul 10 00:36:29.089447 env[1315]: time="2025-07-10T00:36:29.089382450Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:29.091034 env[1315]: time="2025-07-10T00:36:29.090981401Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:29.093069 env[1315]: time="2025-07-10T00:36:29.093039749Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:29.094251 env[1315]: time="2025-07-10T00:36:29.094222182Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:29.094663 env[1315]: time="2025-07-10T00:36:29.094635140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 10 00:36:29.096062 env[1315]: time="2025-07-10T00:36:29.095757453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 10 00:36:29.098406 env[1315]: time="2025-07-10T00:36:29.098366718Z" level=info msg="CreateContainer within sandbox 
\"044aa093e9e11461272206a37bd50161eb2616b8d97f2bee78c4a99ec4c0c216\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 00:36:29.107642 env[1315]: time="2025-07-10T00:36:29.107562905Z" level=info msg="CreateContainer within sandbox \"044aa093e9e11461272206a37bd50161eb2616b8d97f2bee78c4a99ec4c0c216\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c070260b01f154b6f946e2ab73dfabcae85b6083af2535b9d4e107fb19833057\"" Jul 10 00:36:29.109789 env[1315]: time="2025-07-10T00:36:29.108751658Z" level=info msg="StartContainer for \"c070260b01f154b6f946e2ab73dfabcae85b6083af2535b9d4e107fb19833057\"" Jul 10 00:36:29.169378 env[1315]: time="2025-07-10T00:36:29.169333585Z" level=info msg="StartContainer for \"c070260b01f154b6f946e2ab73dfabcae85b6083af2535b9d4e107fb19833057\" returns successfully" Jul 10 00:36:29.370000 audit[4766]: NETFILTER_CFG table=filter:122 family=2 entries=12 op=nft_register_rule pid=4766 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:29.371473 kernel: kauditd_printk_skb: 40 callbacks suppressed Jul 10 00:36:29.371549 kernel: audit: type=1325 audit(1752107789.370:435): table=filter:122 family=2 entries=12 op=nft_register_rule pid=4766 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:29.370000 audit[4766]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffe8e15220 a2=0 a3=1 items=0 ppid=2205 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:29.377053 kernel: audit: type=1300 audit(1752107789.370:435): arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffe8e15220 a2=0 a3=1 items=0 ppid=2205 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:29.370000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:29.379179 kernel: audit: type=1327 audit(1752107789.370:435): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:29.379000 audit[4766]: NETFILTER_CFG table=nat:123 family=2 entries=22 op=nft_register_rule pid=4766 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:29.379000 audit[4766]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffe8e15220 a2=0 a3=1 items=0 ppid=2205 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:29.386028 kernel: audit: type=1325 audit(1752107789.379:436): table=nat:123 family=2 entries=22 op=nft_register_rule pid=4766 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:29.386093 kernel: audit: type=1300 audit(1752107789.379:436): arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffe8e15220 a2=0 a3=1 items=0 ppid=2205 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:29.379000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:29.388381 kernel: audit: type=1327 audit(1752107789.379:436): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:30.344089 kubelet[2094]: I0710 00:36:30.343635 2094 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:36:30.519768 env[1315]: time="2025-07-10T00:36:30.519722836Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:30.520867 env[1315]: time="2025-07-10T00:36:30.520837150Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:30.523265 env[1315]: time="2025-07-10T00:36:30.522933818Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:30.524591 env[1315]: time="2025-07-10T00:36:30.524517449Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:30.524792 env[1315]: time="2025-07-10T00:36:30.524759407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 10 00:36:30.526788 env[1315]: time="2025-07-10T00:36:30.526752196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 10 00:36:30.527572 env[1315]: time="2025-07-10T00:36:30.527538631Z" level=info msg="CreateContainer within sandbox \"3f49ede495569ac49de4d859606ab87018c37d1229ccf1639377e9b6f81f1c1e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 10 00:36:30.539291 env[1315]: time="2025-07-10T00:36:30.539242805Z" level=info msg="CreateContainer within sandbox \"3f49ede495569ac49de4d859606ab87018c37d1229ccf1639377e9b6f81f1c1e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a4749da9308500d008f6b1a3de391d4c7b0cf8b9d815a2ae577500631131fc46\"" Jul 10 00:36:30.540317 env[1315]: time="2025-07-10T00:36:30.540281279Z" level=info msg="StartContainer for \"a4749da9308500d008f6b1a3de391d4c7b0cf8b9d815a2ae577500631131fc46\"" Jul 10 00:36:30.611414 env[1315]: time="2025-07-10T00:36:30.611300754Z" level=info msg="StartContainer for \"a4749da9308500d008f6b1a3de391d4c7b0cf8b9d815a2ae577500631131fc46\" returns successfully" Jul 10 00:36:30.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.85:22-10.0.0.1:54556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:30.917221 systemd[1]: Started sshd@9-10.0.0.85:22-10.0.0.1:54556.service. Jul 10 00:36:30.921471 kernel: audit: type=1130 audit(1752107790.916:437): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.85:22-10.0.0.1:54556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:36:30.963000 audit[4808]: USER_ACCT pid=4808 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:30.964659 sshd[4808]: Accepted publickey for core from 10.0.0.1 port 54556 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:36:30.966452 sshd[4808]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:30.965000 audit[4808]: CRED_ACQ pid=4808 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:30.972243 kernel: audit: type=1101 audit(1752107790.963:438): pid=4808 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:30.972330 kernel: audit: type=1103 audit(1752107790.965:439): pid=4808 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:30.971875 systemd[1]: Started session-10.scope. Jul 10 00:36:30.972102 systemd-logind[1302]: New session 10 of user core. Jul 10 00:36:30.975529 kernel: audit: type=1006 audit(1752107790.965:440): pid=4808 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jul 10 00:36:30.965000 audit[4808]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc72d43d0 a2=3 a3=1 items=0 ppid=1 pid=4808 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:30.965000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:36:30.976000 audit[4808]: USER_START pid=4808 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:30.978000 audit[4811]: CRED_ACQ pid=4811 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:31.231603 sshd[4808]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:31.232257 systemd[1]: Started sshd@10-10.0.0.85:22-10.0.0.1:54572.service. Jul 10 00:36:31.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.85:22-10.0.0.1:54572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:36:31.232000 audit[4808]: USER_END pid=4808 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:31.232000 audit[4808]: CRED_DISP pid=4808 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:31.235042 systemd[1]: sshd@9-10.0.0.85:22-10.0.0.1:54556.service: Deactivated successfully. Jul 10 00:36:31.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.85:22-10.0.0.1:54556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:31.236331 systemd[1]: session-10.scope: Deactivated successfully. Jul 10 00:36:31.237425 systemd-logind[1302]: Session 10 logged out. Waiting for processes to exit. Jul 10 00:36:31.240844 systemd-logind[1302]: Removed session 10. Jul 10 00:36:31.279000 audit[4824]: USER_ACCT pid=4824 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:31.279853 sshd[4824]: Accepted publickey for core from 10.0.0.1 port 54572 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:36:31.280000 audit[4824]: CRED_ACQ pid=4824 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:31.280000 audit[4824]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff0e564a0 a2=3 a3=1 items=0 ppid=1 pid=4824 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:31.280000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:36:31.281192 sshd[4824]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:31.284711 systemd-logind[1302]: New session 11 of user core. Jul 10 00:36:31.285571 systemd[1]: Started session-11.scope. Jul 10 00:36:31.291000 audit[4824]: USER_START pid=4824 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:31.294000 audit[4829]: CRED_ACQ pid=4829 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:31.479211 sshd[4824]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:31.482025 systemd[1]: Started sshd@11-10.0.0.85:22-10.0.0.1:54574.service. 
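The audit PROCTITLE records in this log (the iptables-restore and sshd entries above, for example) carry the command line hex-encoded, with NUL bytes separating argv elements. A small decoder sketch in Python (the helper name is illustrative):

def decode_proctitle(hex_title: str) -> str:
    """Render an audit PROCTITLE hex dump as a readable command line."""
    raw = bytes.fromhex(hex_title)
    # argv elements are NUL-separated inside the audit record
    return " ".join(p.decode("ascii", "replace") for p in raw.split(b"\x00") if p)

# Hex strings copied from the records above:
print(decode_proctitle("69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"))
# -> iptables-restore -w 5 -W 100000 --noflush --counters
print(decode_proctitle("737368643A20636F7265205B707269765D"))
# -> sshd: core [priv]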
Jul 10 00:36:31.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.85:22-10.0.0.1:54574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:31.486000 audit[4824]: USER_END pid=4824 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:31.486000 audit[4824]: CRED_DISP pid=4824 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:31.491310 systemd[1]: sshd@10-10.0.0.85:22-10.0.0.1:54572.service: Deactivated successfully. Jul 10 00:36:31.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.85:22-10.0.0.1:54572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:31.493750 systemd[1]: session-11.scope: Deactivated successfully. Jul 10 00:36:31.494053 systemd-logind[1302]: Session 11 logged out. Waiting for processes to exit. Jul 10 00:36:31.495617 systemd-logind[1302]: Removed session 11. Jul 10 00:36:31.530000 audit[4836]: USER_ACCT pid=4836 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:31.531167 sshd[4836]: Accepted publickey for core from 10.0.0.1 port 54574 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:36:31.531000 audit[4836]: CRED_ACQ pid=4836 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:31.531000 audit[4836]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd2f81b60 a2=3 a3=1 items=0 ppid=1 pid=4836 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:31.531000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:36:31.532924 sshd[4836]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:31.538232 systemd[1]: run-containerd-runc-k8s.io-a4749da9308500d008f6b1a3de391d4c7b0cf8b9d815a2ae577500631131fc46-runc.vPpdRf.mount: Deactivated successfully. Jul 10 00:36:31.541110 systemd-logind[1302]: New session 12 of user core. Jul 10 00:36:31.542030 systemd[1]: Started session-12.scope. 
Jul 10 00:36:31.552000 audit[4836]: USER_START pid=4836 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:31.556000 audit[4843]: CRED_ACQ pid=4843 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:31.731051 sshd[4836]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:31.731000 audit[4836]: USER_END pid=4836 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:31.732000 audit[4836]: CRED_DISP pid=4836 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:31.734102 systemd[1]: sshd@11-10.0.0.85:22-10.0.0.1:54574.service: Deactivated successfully. Jul 10 00:36:31.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.85:22-10.0.0.1:54574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:31.735177 systemd[1]: session-12.scope: Deactivated successfully. Jul 10 00:36:31.735198 systemd-logind[1302]: Session 12 logged out. Waiting for processes to exit. Jul 10 00:36:31.736286 systemd-logind[1302]: Removed session 12. 
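Sessions 10, 11 and 12 above each open and close within roughly a second, the footprint systemd-logind leaves for short, non-interactive SSH exchanges. One way to pair the "Started session-N.scope" and "session-N.scope: Deactivated successfully" messages and measure those windows, sketched against the exact phrasing visible above (the regexes and helper names are assumptions, and journald's short timestamps carry no year, so one is supplied):

import re
from datetime import datetime

STAMP = r"(\w{3} +\d+ \d{2}:\d{2}:\d{2}\.\d+)"
START = re.compile(STAMP + r" systemd\[1\]: Started session-(\d+)\.scope")
STOP  = re.compile(STAMP + r" systemd\[1\]: session-(\d+)\.scope: Deactivated successfully")

def _ts(stamp, year=2025):
    return datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S.%f")

def session_durations(journal_text):
    opened = {ses: _ts(t) for t, ses in START.findall(journal_text)}
    return {ses: _ts(t) - opened[ses]
            for t, ses in STOP.findall(journal_text) if ses in opened}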
Jul 10 00:36:32.966964 env[1315]: time="2025-07-10T00:36:32.966903227Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:32.968749 env[1315]: time="2025-07-10T00:36:32.968717377Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:32.970492 env[1315]: time="2025-07-10T00:36:32.970456768Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:32.971992 env[1315]: time="2025-07-10T00:36:32.971960240Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:32.972464 env[1315]: time="2025-07-10T00:36:32.972425757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 10 00:36:32.974865 env[1315]: time="2025-07-10T00:36:32.974560585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 00:36:32.991944 env[1315]: time="2025-07-10T00:36:32.991895331Z" level=info msg="CreateContainer within sandbox \"e50c56e443176676df2e01b1b7c5c641b65014465ce0b76b75cbd160f4349a92\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 10 00:36:33.004866 env[1315]: time="2025-07-10T00:36:33.004819420Z" level=info msg="CreateContainer within sandbox \"e50c56e443176676df2e01b1b7c5c641b65014465ce0b76b75cbd160f4349a92\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9c061fa6572a918d635ee2b2109a2aed8cb394b2401cb494053c85e658de999f\"" Jul 10 00:36:33.005572 env[1315]: time="2025-07-10T00:36:33.005474617Z" level=info msg="StartContainer for \"9c061fa6572a918d635ee2b2109a2aed8cb394b2401cb494053c85e658de999f\"" Jul 10 00:36:33.064413 env[1315]: time="2025-07-10T00:36:33.064367180Z" level=info msg="StartContainer for \"9c061fa6572a918d635ee2b2109a2aed8cb394b2401cb494053c85e658de999f\" returns successfully" Jul 10 00:36:33.092699 kubelet[2094]: I0710 00:36:33.092661 2094 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:36:33.112905 kubelet[2094]: I0710 00:36:33.112830 2094 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-866977c98d-bvxbn" podStartSLOduration=32.732341781 podStartE2EDuration="37.11281644s" podCreationTimestamp="2025-07-10 00:35:56 +0000 UTC" firstStartedPulling="2025-07-10 00:36:24.715128795 +0000 UTC m=+42.611707856" lastFinishedPulling="2025-07-10 00:36:29.095603454 +0000 UTC m=+46.992182515" observedRunningTime="2025-07-10 00:36:29.358247727 +0000 UTC m=+47.254826788" watchObservedRunningTime="2025-07-10 00:36:33.11281644 +0000 UTC m=+51.009395501" Jul 10 00:36:33.134000 audit[4902]: NETFILTER_CFG table=filter:124 family=2 entries=11 op=nft_register_rule pid=4902 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:33.134000 audit[4902]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffd7a522a0 a2=0 a3=1 
items=0 ppid=2205 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:33.134000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:33.143000 audit[4902]: NETFILTER_CFG table=nat:125 family=2 entries=29 op=nft_register_chain pid=4902 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:33.143000 audit[4902]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10116 a0=3 a1=ffffd7a522a0 a2=0 a3=1 items=0 ppid=2205 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:33.143000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:33.221121 env[1315]: time="2025-07-10T00:36:33.220152063Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:33.222782 env[1315]: time="2025-07-10T00:36:33.222729769Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:33.225003 env[1315]: time="2025-07-10T00:36:33.224293841Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:33.226734 env[1315]: time="2025-07-10T00:36:33.226702828Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:33.227244 env[1315]: time="2025-07-10T00:36:33.227214265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 10 00:36:33.230117 env[1315]: time="2025-07-10T00:36:33.229221694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 10 00:36:33.231191 env[1315]: time="2025-07-10T00:36:33.231156084Z" level=info msg="CreateContainer within sandbox \"8ec0235c8d70956911057f9ad003f6e0f262c90d494f37a83d86140aacdc4611\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 00:36:33.242087 env[1315]: time="2025-07-10T00:36:33.242041225Z" level=info msg="CreateContainer within sandbox \"8ec0235c8d70956911057f9ad003f6e0f262c90d494f37a83d86140aacdc4611\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"80287fcc4b4d607b39d8f428aedf709c83ba625dab4c4ba3f4bc4b2e16050885\"" Jul 10 00:36:33.242981 env[1315]: time="2025-07-10T00:36:33.242909021Z" level=info msg="StartContainer for \"80287fcc4b4d607b39d8f428aedf709c83ba625dab4c4ba3f4bc4b2e16050885\"" Jul 10 00:36:33.328635 env[1315]: time="2025-07-10T00:36:33.328580200Z" level=info msg="StartContainer for \"80287fcc4b4d607b39d8f428aedf709c83ba625dab4c4ba3f4bc4b2e16050885\" returns successfully" Jul 10 00:36:33.370150 kubelet[2094]: 
I0710 00:36:33.365681 2094 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-866977c98d-2dcmd" podStartSLOduration=30.678740994 podStartE2EDuration="37.365664601s" podCreationTimestamp="2025-07-10 00:35:56 +0000 UTC" firstStartedPulling="2025-07-10 00:36:26.541363852 +0000 UTC m=+44.437942913" lastFinishedPulling="2025-07-10 00:36:33.228287459 +0000 UTC m=+51.124866520" observedRunningTime="2025-07-10 00:36:33.364188449 +0000 UTC m=+51.260767510" watchObservedRunningTime="2025-07-10 00:36:33.365664601 +0000 UTC m=+51.262243622" Jul 10 00:36:33.380000 audit[4952]: NETFILTER_CFG table=filter:126 family=2 entries=10 op=nft_register_rule pid=4952 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:33.380000 audit[4952]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffd377c2c0 a2=0 a3=1 items=0 ppid=2205 pid=4952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:33.380000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:33.409000 audit[4952]: NETFILTER_CFG table=nat:127 family=2 entries=32 op=nft_register_rule pid=4952 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:33.421173 kubelet[2094]: I0710 00:36:33.420953 2094 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6946f6c79d-88gpt" podStartSLOduration=24.07101414 podStartE2EDuration="31.420928304s" podCreationTimestamp="2025-07-10 00:36:02 +0000 UTC" firstStartedPulling="2025-07-10 00:36:25.623703307 +0000 UTC m=+43.520282368" lastFinishedPulling="2025-07-10 00:36:32.973617471 +0000 UTC m=+50.870196532" observedRunningTime="2025-07-10 00:36:33.417678401 +0000 UTC m=+51.314257462" watchObservedRunningTime="2025-07-10 00:36:33.420928304 +0000 UTC m=+51.317507365" Jul 10 00:36:33.409000 audit[4952]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10116 a0=3 a1=ffffd377c2c0 a2=0 a3=1 items=0 ppid=2205 pid=4952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:33.409000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:34.356362 kubelet[2094]: I0710 00:36:34.356303 2094 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:36:34.854397 env[1315]: time="2025-07-10T00:36:34.854347519Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:34.858629 env[1315]: time="2025-07-10T00:36:34.858591377Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:34.860824 env[1315]: time="2025-07-10T00:36:34.860781926Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:34.862252 env[1315]: 
time="2025-07-10T00:36:34.862223998Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:36:34.862721 env[1315]: time="2025-07-10T00:36:34.862690635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 10 00:36:34.865236 env[1315]: time="2025-07-10T00:36:34.865202622Z" level=info msg="CreateContainer within sandbox \"3f49ede495569ac49de4d859606ab87018c37d1229ccf1639377e9b6f81f1c1e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 10 00:36:34.878812 env[1315]: time="2025-07-10T00:36:34.878767911Z" level=info msg="CreateContainer within sandbox \"3f49ede495569ac49de4d859606ab87018c37d1229ccf1639377e9b6f81f1c1e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"66f6de025a12fb77933231628743aed6093d27448da7915e4eb75c1c6c91bf9a\"" Jul 10 00:36:34.879694 env[1315]: time="2025-07-10T00:36:34.879663346Z" level=info msg="StartContainer for \"66f6de025a12fb77933231628743aed6093d27448da7915e4eb75c1c6c91bf9a\"" Jul 10 00:36:34.933587 env[1315]: time="2025-07-10T00:36:34.933540981Z" level=info msg="StartContainer for \"66f6de025a12fb77933231628743aed6093d27448da7915e4eb75c1c6c91bf9a\" returns successfully" Jul 10 00:36:34.987959 systemd[1]: run-containerd-runc-k8s.io-66f6de025a12fb77933231628743aed6093d27448da7915e4eb75c1c6c91bf9a-runc.f1621Y.mount: Deactivated successfully. Jul 10 00:36:35.281577 kubelet[2094]: I0710 00:36:35.281531 2094 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 10 00:36:35.283106 kubelet[2094]: I0710 00:36:35.283078 2094 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 10 00:36:35.404382 kubelet[2094]: I0710 00:36:35.404339 2094 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:36:35.430347 kubelet[2094]: I0710 00:36:35.430253 2094 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-fxvwp" podStartSLOduration=23.495303722 podStartE2EDuration="33.430239596s" podCreationTimestamp="2025-07-10 00:36:02 +0000 UTC" firstStartedPulling="2025-07-10 00:36:24.928713596 +0000 UTC m=+42.825292657" lastFinishedPulling="2025-07-10 00:36:34.86364947 +0000 UTC m=+52.760228531" observedRunningTime="2025-07-10 00:36:35.371912059 +0000 UTC m=+53.268491120" watchObservedRunningTime="2025-07-10 00:36:35.430239596 +0000 UTC m=+53.326818657" Jul 10 00:36:35.456232 kernel: kauditd_printk_skb: 41 callbacks suppressed Jul 10 00:36:35.456374 kernel: audit: type=1325 audit(1752107795.454:468): table=filter:128 family=2 entries=10 op=nft_register_rule pid=5007 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:35.454000 audit[5007]: NETFILTER_CFG table=filter:128 family=2 entries=10 op=nft_register_rule pid=5007 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:35.454000 audit[5007]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffd1ef1b60 a2=0 a3=1 items=0 ppid=2205 pid=5007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:35.461966 kernel: audit: type=1300 audit(1752107795.454:468): arch=c00000b7 syscall=211 success=yes exit=3760 a0=3 a1=ffffd1ef1b60 a2=0 a3=1 items=0 ppid=2205 pid=5007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:35.462029 kernel: audit: type=1327 audit(1752107795.454:468): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:35.454000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:35.469000 audit[5007]: NETFILTER_CFG table=nat:129 family=2 entries=36 op=nft_register_chain pid=5007 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:35.469000 audit[5007]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=12004 a0=3 a1=ffffd1ef1b60 a2=0 a3=1 items=0 ppid=2205 pid=5007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:35.476331 kernel: audit: type=1325 audit(1752107795.469:469): table=nat:129 family=2 entries=36 op=nft_register_chain pid=5007 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:35.476389 kernel: audit: type=1300 audit(1752107795.469:469): arch=c00000b7 syscall=211 success=yes exit=12004 a0=3 a1=ffffd1ef1b60 a2=0 a3=1 items=0 ppid=2205 pid=5007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:35.476419 kernel: audit: type=1327 audit(1752107795.469:469): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:35.469000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:36.378457 kubelet[2094]: I0710 00:36:36.378404 2094 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:36:36.579000 audit[5049]: NETFILTER_CFG table=filter:130 family=2 entries=9 op=nft_register_rule pid=5049 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:36.579000 audit[5049]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffe4979910 a2=0 a3=1 items=0 ppid=2205 pid=5049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:36.587242 kernel: audit: type=1325 audit(1752107796.579:470): table=filter:130 family=2 entries=9 op=nft_register_rule pid=5049 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:36.587343 kernel: audit: type=1300 audit(1752107796.579:470): arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffe4979910 a2=0 a3=1 items=0 ppid=2205 pid=5049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:36.587368 kernel: audit: type=1327 audit(1752107796.579:470): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:36.579000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:36.591000 audit[5049]: NETFILTER_CFG table=nat:131 family=2 entries=31 op=nft_register_chain pid=5049 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:36.591000 audit[5049]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10884 a0=3 a1=ffffe4979910 a2=0 a3=1 items=0 ppid=2205 pid=5049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:36.591000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:36.595449 kernel: audit: type=1325 audit(1752107796.591:471): table=nat:131 family=2 entries=31 op=nft_register_chain pid=5049 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:36.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.85:22-10.0.0.1:37068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:36.734383 systemd[1]: Started sshd@12-10.0.0.85:22-10.0.0.1:37068.service. Jul 10 00:36:36.786000 audit[5056]: USER_ACCT pid=5056 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:36.787245 sshd[5056]: Accepted publickey for core from 10.0.0.1 port 37068 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:36:36.788000 audit[5056]: CRED_ACQ pid=5056 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:36.788000 audit[5056]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd3ee1130 a2=3 a3=1 items=0 ppid=1 pid=5056 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:36.788000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:36:36.789107 sshd[5056]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:36.793254 systemd-logind[1302]: New session 13 of user core. Jul 10 00:36:36.794262 systemd[1]: Started session-13.scope. 
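In the pod_startup_latency_tracker entries above, podStartE2EDuration and podStartSLOduration differ by exactly the image-pull window (lastFinishedPulling minus firstStartedPulling), consistent with the SLO figure excluding image pulls. A quick check with values copied from the calico-apiserver-866977c98d-2dcmd entry (an annotation only, not kubelet code):

from datetime import datetime

def ts(s):
    # truncate the log's nanosecond timestamps to microseconds for strptime
    return datetime.strptime(s[:26], "%Y-%m-%d %H:%M:%S.%f")

created   = ts("2025-07-10 00:35:56.000000000")   # podCreationTimestamp
observed  = ts("2025-07-10 00:36:33.365664601")   # watchObservedRunningTime
pull_from = ts("2025-07-10 00:36:26.541363852")   # firstStartedPulling
pull_to   = ts("2025-07-10 00:36:33.228287459")   # lastFinishedPulling

e2e = (observed - created).total_seconds()
slo = e2e - (pull_to - pull_from).total_seconds()
print(f"E2E ~ {e2e:.6f}s, SLO ~ {slo:.6f}s")
# ~37.365664s and ~30.678740s, matching the logged 37.365664601s and 30.678740994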
Jul 10 00:36:36.797000 audit[5056]: USER_START pid=5056 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:36.798000 audit[5059]: CRED_ACQ pid=5059 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:37.048181 sshd[5056]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:37.048000 audit[5056]: USER_END pid=5056 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:37.048000 audit[5056]: CRED_DISP pid=5056 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:37.050895 systemd[1]: sshd@12-10.0.0.85:22-10.0.0.1:37068.service: Deactivated successfully. Jul 10 00:36:37.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.85:22-10.0.0.1:37068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:37.051977 systemd-logind[1302]: Session 13 logged out. Waiting for processes to exit. Jul 10 00:36:37.052039 systemd[1]: session-13.scope: Deactivated successfully. Jul 10 00:36:37.052809 systemd-logind[1302]: Removed session 13. Jul 10 00:36:42.051530 systemd[1]: Started sshd@13-10.0.0.85:22-10.0.0.1:37084.service. Jul 10 00:36:42.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.85:22-10.0.0.1:37084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:42.052572 kernel: kauditd_printk_skb: 13 callbacks suppressed Jul 10 00:36:42.052641 kernel: audit: type=1130 audit(1752107802.051:481): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.85:22-10.0.0.1:37084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:36:42.092000 audit[5078]: USER_ACCT pid=5078 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:42.093316 sshd[5078]: Accepted publickey for core from 10.0.0.1 port 37084 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:36:42.094772 sshd[5078]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:42.093000 audit[5078]: CRED_ACQ pid=5078 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:42.099317 kernel: audit: type=1101 audit(1752107802.092:482): pid=5078 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:42.099382 kernel: audit: type=1103 audit(1752107802.093:483): pid=5078 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:42.099409 kernel: audit: type=1006 audit(1752107802.094:484): pid=5078 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jul 10 00:36:42.101444 kernel: audit: type=1300 audit(1752107802.094:484): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcf4ca340 a2=3 a3=1 items=0 ppid=1 pid=5078 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:42.094000 audit[5078]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcf4ca340 a2=3 a3=1 items=0 ppid=1 pid=5078 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:42.100558 systemd[1]: Started session-14.scope. Jul 10 00:36:42.100583 systemd-logind[1302]: New session 14 of user core. 
Jul 10 00:36:42.104579 kernel: audit: type=1327 audit(1752107802.094:484): proctitle=737368643A20636F7265205B707269765D Jul 10 00:36:42.094000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:36:42.105000 audit[5078]: USER_START pid=5078 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:42.109669 kernel: audit: type=1105 audit(1752107802.105:485): pid=5078 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:42.106000 audit[5081]: CRED_ACQ pid=5081 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:42.112697 kernel: audit: type=1103 audit(1752107802.106:486): pid=5081 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:42.188699 env[1315]: time="2025-07-10T00:36:42.188653420Z" level=info msg="StopPodSandbox for \"d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6\"" Jul 10 00:36:42.245277 sshd[5078]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:42.245000 audit[5078]: USER_END pid=5078 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:42.247673 systemd[1]: sshd@13-10.0.0.85:22-10.0.0.1:37084.service: Deactivated successfully. Jul 10 00:36:42.248522 systemd[1]: session-14.scope: Deactivated successfully. Jul 10 00:36:42.245000 audit[5078]: CRED_DISP pid=5078 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:42.252052 systemd-logind[1302]: Session 14 logged out. Waiting for processes to exit. Jul 10 00:36:42.253411 kernel: audit: type=1106 audit(1752107802.245:487): pid=5078 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:42.253532 kernel: audit: type=1104 audit(1752107802.245:488): pid=5078 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:42.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.85:22-10.0.0.1:37084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:36:42.254036 systemd-logind[1302]: Removed session 14. Jul 10 00:36:42.281799 env[1315]: 2025-07-10 00:36:42.242 [WARNING][5101] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" WorkloadEndpoint="localhost-k8s-whisker--666c86c9b9--ksjrb-eth0" Jul 10 00:36:42.281799 env[1315]: 2025-07-10 00:36:42.242 [INFO][5101] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" Jul 10 00:36:42.281799 env[1315]: 2025-07-10 00:36:42.242 [INFO][5101] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" iface="eth0" netns="" Jul 10 00:36:42.281799 env[1315]: 2025-07-10 00:36:42.242 [INFO][5101] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" Jul 10 00:36:42.281799 env[1315]: 2025-07-10 00:36:42.242 [INFO][5101] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" Jul 10 00:36:42.281799 env[1315]: 2025-07-10 00:36:42.266 [INFO][5110] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" HandleID="k8s-pod-network.d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" Workload="localhost-k8s-whisker--666c86c9b9--ksjrb-eth0" Jul 10 00:36:42.281799 env[1315]: 2025-07-10 00:36:42.267 [INFO][5110] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:36:42.281799 env[1315]: 2025-07-10 00:36:42.267 [INFO][5110] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:36:42.281799 env[1315]: 2025-07-10 00:36:42.276 [WARNING][5110] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" HandleID="k8s-pod-network.d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" Workload="localhost-k8s-whisker--666c86c9b9--ksjrb-eth0" Jul 10 00:36:42.281799 env[1315]: 2025-07-10 00:36:42.276 [INFO][5110] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" HandleID="k8s-pod-network.d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" Workload="localhost-k8s-whisker--666c86c9b9--ksjrb-eth0" Jul 10 00:36:42.281799 env[1315]: 2025-07-10 00:36:42.278 [INFO][5110] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:36:42.281799 env[1315]: 2025-07-10 00:36:42.280 [INFO][5101] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" Jul 10 00:36:42.282216 env[1315]: time="2025-07-10T00:36:42.281828141Z" level=info msg="TearDown network for sandbox \"d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6\" successfully" Jul 10 00:36:42.282216 env[1315]: time="2025-07-10T00:36:42.281864181Z" level=info msg="StopPodSandbox for \"d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6\" returns successfully" Jul 10 00:36:42.282611 env[1315]: time="2025-07-10T00:36:42.282587417Z" level=info msg="RemovePodSandbox for \"d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6\"" Jul 10 00:36:42.282741 env[1315]: time="2025-07-10T00:36:42.282701217Z" level=info msg="Forcibly stopping sandbox \"d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6\"" Jul 10 00:36:42.344161 env[1315]: 2025-07-10 00:36:42.314 [WARNING][5132] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" WorkloadEndpoint="localhost-k8s-whisker--666c86c9b9--ksjrb-eth0" Jul 10 00:36:42.344161 env[1315]: 2025-07-10 00:36:42.314 [INFO][5132] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" Jul 10 00:36:42.344161 env[1315]: 2025-07-10 00:36:42.314 [INFO][5132] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" iface="eth0" netns="" Jul 10 00:36:42.344161 env[1315]: 2025-07-10 00:36:42.314 [INFO][5132] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" Jul 10 00:36:42.344161 env[1315]: 2025-07-10 00:36:42.314 [INFO][5132] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" Jul 10 00:36:42.344161 env[1315]: 2025-07-10 00:36:42.331 [INFO][5141] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" HandleID="k8s-pod-network.d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" Workload="localhost-k8s-whisker--666c86c9b9--ksjrb-eth0" Jul 10 00:36:42.344161 env[1315]: 2025-07-10 00:36:42.331 [INFO][5141] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:36:42.344161 env[1315]: 2025-07-10 00:36:42.331 [INFO][5141] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:36:42.344161 env[1315]: 2025-07-10 00:36:42.339 [WARNING][5141] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" HandleID="k8s-pod-network.d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" Workload="localhost-k8s-whisker--666c86c9b9--ksjrb-eth0" Jul 10 00:36:42.344161 env[1315]: 2025-07-10 00:36:42.339 [INFO][5141] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" HandleID="k8s-pod-network.d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" Workload="localhost-k8s-whisker--666c86c9b9--ksjrb-eth0" Jul 10 00:36:42.344161 env[1315]: 2025-07-10 00:36:42.341 [INFO][5141] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:36:42.344161 env[1315]: 2025-07-10 00:36:42.342 [INFO][5132] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6" Jul 10 00:36:42.344679 env[1315]: time="2025-07-10T00:36:42.344632805Z" level=info msg="TearDown network for sandbox \"d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6\" successfully" Jul 10 00:36:42.348113 env[1315]: time="2025-07-10T00:36:42.348068948Z" level=info msg="RemovePodSandbox \"d0aedd1d34742d603e3e8489625e8264ffb2acc07a5eaed9a3e7e8b42c4c40d6\" returns successfully" Jul 10 00:36:42.348884 env[1315]: time="2025-07-10T00:36:42.348830505Z" level=info msg="StopPodSandbox for \"182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0\"" Jul 10 00:36:42.436827 env[1315]: 2025-07-10 00:36:42.385 [WARNING][5159] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--866977c98d--2dcmd-eth0", GenerateName:"calico-apiserver-866977c98d-", Namespace:"calico-apiserver", SelfLink:"", UID:"96a10c6f-bf44-4fd2-abf3-72068d8168d1", ResourceVersion:"1216", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 35, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"866977c98d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ec0235c8d70956911057f9ad003f6e0f262c90d494f37a83d86140aacdc4611", Pod:"calico-apiserver-866977c98d-2dcmd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali08ba20e1536", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:36:42.436827 env[1315]: 2025-07-10 00:36:42.385 [INFO][5159] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" Jul 10 00:36:42.436827 env[1315]: 2025-07-10 00:36:42.385 [INFO][5159] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" iface="eth0" netns="" Jul 10 00:36:42.436827 env[1315]: 2025-07-10 00:36:42.385 [INFO][5159] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" Jul 10 00:36:42.436827 env[1315]: 2025-07-10 00:36:42.385 [INFO][5159] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" Jul 10 00:36:42.436827 env[1315]: 2025-07-10 00:36:42.423 [INFO][5169] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" HandleID="k8s-pod-network.182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" Workload="localhost-k8s-calico--apiserver--866977c98d--2dcmd-eth0" Jul 10 00:36:42.436827 env[1315]: 2025-07-10 00:36:42.423 [INFO][5169] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:36:42.436827 env[1315]: 2025-07-10 00:36:42.424 [INFO][5169] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:36:42.436827 env[1315]: 2025-07-10 00:36:42.432 [WARNING][5169] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" HandleID="k8s-pod-network.182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" Workload="localhost-k8s-calico--apiserver--866977c98d--2dcmd-eth0" Jul 10 00:36:42.436827 env[1315]: 2025-07-10 00:36:42.432 [INFO][5169] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" HandleID="k8s-pod-network.182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" Workload="localhost-k8s-calico--apiserver--866977c98d--2dcmd-eth0" Jul 10 00:36:42.436827 env[1315]: 2025-07-10 00:36:42.433 [INFO][5169] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:36:42.436827 env[1315]: 2025-07-10 00:36:42.435 [INFO][5159] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" Jul 10 00:36:42.437404 env[1315]: time="2025-07-10T00:36:42.437368247Z" level=info msg="TearDown network for sandbox \"182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0\" successfully" Jul 10 00:36:42.437489 env[1315]: time="2025-07-10T00:36:42.437471687Z" level=info msg="StopPodSandbox for \"182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0\" returns successfully" Jul 10 00:36:42.438223 env[1315]: time="2025-07-10T00:36:42.438193323Z" level=info msg="RemovePodSandbox for \"182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0\"" Jul 10 00:36:42.438297 env[1315]: time="2025-07-10T00:36:42.438231683Z" level=info msg="Forcibly stopping sandbox \"182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0\"" Jul 10 00:36:42.510766 env[1315]: 2025-07-10 00:36:42.477 [WARNING][5186] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--866977c98d--2dcmd-eth0", GenerateName:"calico-apiserver-866977c98d-", Namespace:"calico-apiserver", SelfLink:"", UID:"96a10c6f-bf44-4fd2-abf3-72068d8168d1", ResourceVersion:"1216", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 35, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"866977c98d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ec0235c8d70956911057f9ad003f6e0f262c90d494f37a83d86140aacdc4611", Pod:"calico-apiserver-866977c98d-2dcmd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali08ba20e1536", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:36:42.510766 env[1315]: 2025-07-10 00:36:42.477 [INFO][5186] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" Jul 10 00:36:42.510766 env[1315]: 2025-07-10 00:36:42.477 [INFO][5186] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" iface="eth0" netns="" Jul 10 00:36:42.510766 env[1315]: 2025-07-10 00:36:42.477 [INFO][5186] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" Jul 10 00:36:42.510766 env[1315]: 2025-07-10 00:36:42.477 [INFO][5186] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" Jul 10 00:36:42.510766 env[1315]: 2025-07-10 00:36:42.497 [INFO][5194] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" HandleID="k8s-pod-network.182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" Workload="localhost-k8s-calico--apiserver--866977c98d--2dcmd-eth0" Jul 10 00:36:42.510766 env[1315]: 2025-07-10 00:36:42.497 [INFO][5194] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:36:42.510766 env[1315]: 2025-07-10 00:36:42.497 [INFO][5194] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:36:42.510766 env[1315]: 2025-07-10 00:36:42.505 [WARNING][5194] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" HandleID="k8s-pod-network.182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" Workload="localhost-k8s-calico--apiserver--866977c98d--2dcmd-eth0" Jul 10 00:36:42.510766 env[1315]: 2025-07-10 00:36:42.506 [INFO][5194] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" HandleID="k8s-pod-network.182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" Workload="localhost-k8s-calico--apiserver--866977c98d--2dcmd-eth0" Jul 10 00:36:42.510766 env[1315]: 2025-07-10 00:36:42.507 [INFO][5194] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:36:42.510766 env[1315]: 2025-07-10 00:36:42.509 [INFO][5186] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0" Jul 10 00:36:42.511205 env[1315]: time="2025-07-10T00:36:42.510792621Z" level=info msg="TearDown network for sandbox \"182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0\" successfully" Jul 10 00:36:42.513883 env[1315]: time="2025-07-10T00:36:42.513834647Z" level=info msg="RemovePodSandbox \"182b1bf7930e85e0316c7845d03e16f3506a56e10563d89cd8514410db9e4ec0\" returns successfully" Jul 10 00:36:42.514323 env[1315]: time="2025-07-10T00:36:42.514289084Z" level=info msg="StopPodSandbox for \"9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464\"" Jul 10 00:36:42.585496 env[1315]: 2025-07-10 00:36:42.551 [WARNING][5212] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6946f6c79d--88gpt-eth0", GenerateName:"calico-kube-controllers-6946f6c79d-", Namespace:"calico-system", SelfLink:"", UID:"a753c3a5-5f12-42b7-a570-848c45ac60a4", ResourceVersion:"1188", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 36, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6946f6c79d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e50c56e443176676df2e01b1b7c5c641b65014465ce0b76b75cbd160f4349a92", Pod:"calico-kube-controllers-6946f6c79d-88gpt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic772ce8e667", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:36:42.585496 env[1315]: 2025-07-10 00:36:42.551 [INFO][5212] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" Jul 
10 00:36:42.585496 env[1315]: 2025-07-10 00:36:42.551 [INFO][5212] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" iface="eth0" netns="" Jul 10 00:36:42.585496 env[1315]: 2025-07-10 00:36:42.551 [INFO][5212] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" Jul 10 00:36:42.585496 env[1315]: 2025-07-10 00:36:42.551 [INFO][5212] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" Jul 10 00:36:42.585496 env[1315]: 2025-07-10 00:36:42.570 [INFO][5221] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" HandleID="k8s-pod-network.9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" Workload="localhost-k8s-calico--kube--controllers--6946f6c79d--88gpt-eth0" Jul 10 00:36:42.585496 env[1315]: 2025-07-10 00:36:42.570 [INFO][5221] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:36:42.585496 env[1315]: 2025-07-10 00:36:42.570 [INFO][5221] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:36:42.585496 env[1315]: 2025-07-10 00:36:42.579 [WARNING][5221] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" HandleID="k8s-pod-network.9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" Workload="localhost-k8s-calico--kube--controllers--6946f6c79d--88gpt-eth0" Jul 10 00:36:42.585496 env[1315]: 2025-07-10 00:36:42.580 [INFO][5221] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" HandleID="k8s-pod-network.9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" Workload="localhost-k8s-calico--kube--controllers--6946f6c79d--88gpt-eth0" Jul 10 00:36:42.585496 env[1315]: 2025-07-10 00:36:42.582 [INFO][5221] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:36:42.585496 env[1315]: 2025-07-10 00:36:42.583 [INFO][5212] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" Jul 10 00:36:42.585496 env[1315]: time="2025-07-10T00:36:42.585340429Z" level=info msg="TearDown network for sandbox \"9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464\" successfully" Jul 10 00:36:42.585496 env[1315]: time="2025-07-10T00:36:42.585396309Z" level=info msg="StopPodSandbox for \"9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464\" returns successfully" Jul 10 00:36:42.585978 env[1315]: time="2025-07-10T00:36:42.585860507Z" level=info msg="RemovePodSandbox for \"9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464\"" Jul 10 00:36:42.585978 env[1315]: time="2025-07-10T00:36:42.585901267Z" level=info msg="Forcibly stopping sandbox \"9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464\"" Jul 10 00:36:42.659636 env[1315]: 2025-07-10 00:36:42.623 [WARNING][5239] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6946f6c79d--88gpt-eth0", GenerateName:"calico-kube-controllers-6946f6c79d-", Namespace:"calico-system", SelfLink:"", UID:"a753c3a5-5f12-42b7-a570-848c45ac60a4", ResourceVersion:"1188", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 36, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6946f6c79d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e50c56e443176676df2e01b1b7c5c641b65014465ce0b76b75cbd160f4349a92", Pod:"calico-kube-controllers-6946f6c79d-88gpt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic772ce8e667", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:36:42.659636 env[1315]: 2025-07-10 00:36:42.623 [INFO][5239] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" Jul 10 00:36:42.659636 env[1315]: 2025-07-10 00:36:42.623 [INFO][5239] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" iface="eth0" netns="" Jul 10 00:36:42.659636 env[1315]: 2025-07-10 00:36:42.623 [INFO][5239] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" Jul 10 00:36:42.659636 env[1315]: 2025-07-10 00:36:42.623 [INFO][5239] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" Jul 10 00:36:42.659636 env[1315]: 2025-07-10 00:36:42.645 [INFO][5248] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" HandleID="k8s-pod-network.9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" Workload="localhost-k8s-calico--kube--controllers--6946f6c79d--88gpt-eth0" Jul 10 00:36:42.659636 env[1315]: 2025-07-10 00:36:42.645 [INFO][5248] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:36:42.659636 env[1315]: 2025-07-10 00:36:42.645 [INFO][5248] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:36:42.659636 env[1315]: 2025-07-10 00:36:42.654 [WARNING][5248] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" HandleID="k8s-pod-network.9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" Workload="localhost-k8s-calico--kube--controllers--6946f6c79d--88gpt-eth0" Jul 10 00:36:42.659636 env[1315]: 2025-07-10 00:36:42.654 [INFO][5248] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" HandleID="k8s-pod-network.9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" Workload="localhost-k8s-calico--kube--controllers--6946f6c79d--88gpt-eth0" Jul 10 00:36:42.659636 env[1315]: 2025-07-10 00:36:42.656 [INFO][5248] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:36:42.659636 env[1315]: 2025-07-10 00:36:42.657 [INFO][5239] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464" Jul 10 00:36:42.659636 env[1315]: time="2025-07-10T00:36:42.659588759Z" level=info msg="TearDown network for sandbox \"9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464\" successfully" Jul 10 00:36:42.663839 env[1315]: time="2025-07-10T00:36:42.663799859Z" level=info msg="RemovePodSandbox \"9ac10abd55ae26fc869bb9f16167a905b344dbbbdd0d71773039291b5e63d464\" returns successfully" Jul 10 00:36:42.664267 env[1315]: time="2025-07-10T00:36:42.664243977Z" level=info msg="StopPodSandbox for \"386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2\"" Jul 10 00:36:42.742711 env[1315]: 2025-07-10 00:36:42.701 [WARNING][5267] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--zl9sp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2b825d2b-e0da-4a05-8c42-0b337b179ba3", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 35, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"54e5600895ff7226c89cb6c54a555a927eec0f757e1c63994945a15be4603a5e", Pod:"coredns-7c65d6cfc9-zl9sp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e5b0a0a761", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:36:42.742711 env[1315]: 2025-07-10 00:36:42.701 [INFO][5267] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" Jul 10 00:36:42.742711 env[1315]: 2025-07-10 00:36:42.702 [INFO][5267] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" iface="eth0" netns="" Jul 10 00:36:42.742711 env[1315]: 2025-07-10 00:36:42.702 [INFO][5267] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" Jul 10 00:36:42.742711 env[1315]: 2025-07-10 00:36:42.702 [INFO][5267] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" Jul 10 00:36:42.742711 env[1315]: 2025-07-10 00:36:42.728 [INFO][5277] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" HandleID="k8s-pod-network.386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" Workload="localhost-k8s-coredns--7c65d6cfc9--zl9sp-eth0" Jul 10 00:36:42.742711 env[1315]: 2025-07-10 00:36:42.728 [INFO][5277] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:36:42.742711 env[1315]: 2025-07-10 00:36:42.728 [INFO][5277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:36:42.742711 env[1315]: 2025-07-10 00:36:42.737 [WARNING][5277] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" HandleID="k8s-pod-network.386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" Workload="localhost-k8s-coredns--7c65d6cfc9--zl9sp-eth0" Jul 10 00:36:42.742711 env[1315]: 2025-07-10 00:36:42.737 [INFO][5277] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" HandleID="k8s-pod-network.386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" Workload="localhost-k8s-coredns--7c65d6cfc9--zl9sp-eth0" Jul 10 00:36:42.742711 env[1315]: 2025-07-10 00:36:42.739 [INFO][5277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:36:42.742711 env[1315]: 2025-07-10 00:36:42.741 [INFO][5267] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" Jul 10 00:36:42.743203 env[1315]: time="2025-07-10T00:36:42.742739487Z" level=info msg="TearDown network for sandbox \"386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2\" successfully" Jul 10 00:36:42.743203 env[1315]: time="2025-07-10T00:36:42.742770887Z" level=info msg="StopPodSandbox for \"386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2\" returns successfully" Jul 10 00:36:42.743264 env[1315]: time="2025-07-10T00:36:42.743185725Z" level=info msg="RemovePodSandbox for \"386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2\"" Jul 10 00:36:42.743264 env[1315]: time="2025-07-10T00:36:42.743228285Z" level=info msg="Forcibly stopping sandbox \"386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2\"" Jul 10 00:36:42.813860 env[1315]: 2025-07-10 00:36:42.780 [WARNING][5295] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--zl9sp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2b825d2b-e0da-4a05-8c42-0b337b179ba3", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 35, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"54e5600895ff7226c89cb6c54a555a927eec0f757e1c63994945a15be4603a5e", Pod:"coredns-7c65d6cfc9-zl9sp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e5b0a0a761", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:36:42.813860 env[1315]: 2025-07-10 00:36:42.780 [INFO][5295] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" Jul 10 00:36:42.813860 env[1315]: 2025-07-10 00:36:42.780 [INFO][5295] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" iface="eth0" netns="" Jul 10 00:36:42.813860 env[1315]: 2025-07-10 00:36:42.780 [INFO][5295] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" Jul 10 00:36:42.813860 env[1315]: 2025-07-10 00:36:42.780 [INFO][5295] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" Jul 10 00:36:42.813860 env[1315]: 2025-07-10 00:36:42.799 [INFO][5304] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" HandleID="k8s-pod-network.386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" Workload="localhost-k8s-coredns--7c65d6cfc9--zl9sp-eth0" Jul 10 00:36:42.813860 env[1315]: 2025-07-10 00:36:42.799 [INFO][5304] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:36:42.813860 env[1315]: 2025-07-10 00:36:42.799 [INFO][5304] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:36:42.813860 env[1315]: 2025-07-10 00:36:42.808 [WARNING][5304] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" HandleID="k8s-pod-network.386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" Workload="localhost-k8s-coredns--7c65d6cfc9--zl9sp-eth0" Jul 10 00:36:42.813860 env[1315]: 2025-07-10 00:36:42.808 [INFO][5304] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" HandleID="k8s-pod-network.386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" Workload="localhost-k8s-coredns--7c65d6cfc9--zl9sp-eth0" Jul 10 00:36:42.813860 env[1315]: 2025-07-10 00:36:42.810 [INFO][5304] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:36:42.813860 env[1315]: 2025-07-10 00:36:42.812 [INFO][5295] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2" Jul 10 00:36:42.814321 env[1315]: time="2025-07-10T00:36:42.813895072Z" level=info msg="TearDown network for sandbox \"386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2\" successfully" Jul 10 00:36:42.818318 env[1315]: time="2025-07-10T00:36:42.818282171Z" level=info msg="RemovePodSandbox \"386903a168651e4853c3beeaaa3d9c1e46f4a4214d5df96475d3e79aab0237c2\" returns successfully" Jul 10 00:36:42.818767 env[1315]: time="2025-07-10T00:36:42.818740729Z" level=info msg="StopPodSandbox for \"99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86\"" Jul 10 00:36:42.883262 env[1315]: 2025-07-10 00:36:42.851 [WARNING][5322] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--866977c98d--bvxbn-eth0", GenerateName:"calico-apiserver-866977c98d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2c85e535-ff6b-4896-a1a8-ba8b4bb49ac5", ResourceVersion:"1168", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 35, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"866977c98d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"044aa093e9e11461272206a37bd50161eb2616b8d97f2bee78c4a99ec4c0c216", Pod:"calico-apiserver-866977c98d-bvxbn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d5b822b871", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:36:42.883262 env[1315]: 2025-07-10 00:36:42.851 [INFO][5322] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" Jul 10 00:36:42.883262 env[1315]: 2025-07-10 00:36:42.851 [INFO][5322] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" iface="eth0" netns="" Jul 10 00:36:42.883262 env[1315]: 2025-07-10 00:36:42.851 [INFO][5322] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" Jul 10 00:36:42.883262 env[1315]: 2025-07-10 00:36:42.851 [INFO][5322] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" Jul 10 00:36:42.883262 env[1315]: 2025-07-10 00:36:42.869 [INFO][5331] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" HandleID="k8s-pod-network.99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" Workload="localhost-k8s-calico--apiserver--866977c98d--bvxbn-eth0" Jul 10 00:36:42.883262 env[1315]: 2025-07-10 00:36:42.869 [INFO][5331] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:36:42.883262 env[1315]: 2025-07-10 00:36:42.869 [INFO][5331] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:36:42.883262 env[1315]: 2025-07-10 00:36:42.878 [WARNING][5331] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" HandleID="k8s-pod-network.99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" Workload="localhost-k8s-calico--apiserver--866977c98d--bvxbn-eth0" Jul 10 00:36:42.883262 env[1315]: 2025-07-10 00:36:42.878 [INFO][5331] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" HandleID="k8s-pod-network.99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" Workload="localhost-k8s-calico--apiserver--866977c98d--bvxbn-eth0" Jul 10 00:36:42.883262 env[1315]: 2025-07-10 00:36:42.880 [INFO][5331] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:36:42.883262 env[1315]: 2025-07-10 00:36:42.881 [INFO][5322] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" Jul 10 00:36:42.883720 env[1315]: time="2025-07-10T00:36:42.883310184Z" level=info msg="TearDown network for sandbox \"99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86\" successfully" Jul 10 00:36:42.883720 env[1315]: time="2025-07-10T00:36:42.883341144Z" level=info msg="StopPodSandbox for \"99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86\" returns successfully" Jul 10 00:36:42.883917 env[1315]: time="2025-07-10T00:36:42.883873102Z" level=info msg="RemovePodSandbox for \"99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86\"" Jul 10 00:36:42.883965 env[1315]: time="2025-07-10T00:36:42.883923701Z" level=info msg="Forcibly stopping sandbox \"99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86\"" Jul 10 00:36:42.946824 env[1315]: 2025-07-10 00:36:42.916 [WARNING][5349] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--866977c98d--bvxbn-eth0", GenerateName:"calico-apiserver-866977c98d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2c85e535-ff6b-4896-a1a8-ba8b4bb49ac5", ResourceVersion:"1168", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 35, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"866977c98d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"044aa093e9e11461272206a37bd50161eb2616b8d97f2bee78c4a99ec4c0c216", Pod:"calico-apiserver-866977c98d-bvxbn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d5b822b871", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:36:42.946824 env[1315]: 2025-07-10 00:36:42.916 [INFO][5349] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" Jul 10 00:36:42.946824 env[1315]: 2025-07-10 00:36:42.916 [INFO][5349] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" iface="eth0" netns="" Jul 10 00:36:42.946824 env[1315]: 2025-07-10 00:36:42.916 [INFO][5349] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" Jul 10 00:36:42.946824 env[1315]: 2025-07-10 00:36:42.916 [INFO][5349] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" Jul 10 00:36:42.946824 env[1315]: 2025-07-10 00:36:42.933 [INFO][5358] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" HandleID="k8s-pod-network.99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" Workload="localhost-k8s-calico--apiserver--866977c98d--bvxbn-eth0" Jul 10 00:36:42.946824 env[1315]: 2025-07-10 00:36:42.933 [INFO][5358] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:36:42.946824 env[1315]: 2025-07-10 00:36:42.933 [INFO][5358] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:36:42.946824 env[1315]: 2025-07-10 00:36:42.941 [WARNING][5358] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" HandleID="k8s-pod-network.99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" Workload="localhost-k8s-calico--apiserver--866977c98d--bvxbn-eth0" Jul 10 00:36:42.946824 env[1315]: 2025-07-10 00:36:42.941 [INFO][5358] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" HandleID="k8s-pod-network.99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" Workload="localhost-k8s-calico--apiserver--866977c98d--bvxbn-eth0" Jul 10 00:36:42.946824 env[1315]: 2025-07-10 00:36:42.943 [INFO][5358] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:36:42.946824 env[1315]: 2025-07-10 00:36:42.945 [INFO][5349] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86" Jul 10 00:36:42.946824 env[1315]: time="2025-07-10T00:36:42.946785045Z" level=info msg="TearDown network for sandbox \"99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86\" successfully" Jul 10 00:36:42.950944 env[1315]: time="2025-07-10T00:36:42.950899745Z" level=info msg="RemovePodSandbox \"99d01aa537096b19312a993871dd04efd6ede619e6673197bad81950d155ba86\" returns successfully" Jul 10 00:36:42.951401 env[1315]: time="2025-07-10T00:36:42.951366183Z" level=info msg="StopPodSandbox for \"c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77\"" Jul 10 00:36:43.012714 env[1315]: 2025-07-10 00:36:42.982 [WARNING][5377] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--2plgf-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7ca43f8d-e9b6-493a-b482-3d2dc7232c75", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 35, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6571ea4c14588882565bc031474feccb76a8f356384e912617b88a4107b72646", Pod:"coredns-7c65d6cfc9-2plgf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6516e0cc8a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:36:43.012714 env[1315]: 2025-07-10 00:36:42.982 [INFO][5377] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" Jul 10 00:36:43.012714 env[1315]: 2025-07-10 00:36:42.982 [INFO][5377] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" iface="eth0" netns="" Jul 10 00:36:43.012714 env[1315]: 2025-07-10 00:36:42.982 [INFO][5377] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" Jul 10 00:36:43.012714 env[1315]: 2025-07-10 00:36:42.982 [INFO][5377] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" Jul 10 00:36:43.012714 env[1315]: 2025-07-10 00:36:42.999 [INFO][5386] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" HandleID="k8s-pod-network.c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" Workload="localhost-k8s-coredns--7c65d6cfc9--2plgf-eth0" Jul 10 00:36:43.012714 env[1315]: 2025-07-10 00:36:42.999 [INFO][5386] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:36:43.012714 env[1315]: 2025-07-10 00:36:42.999 [INFO][5386] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:36:43.012714 env[1315]: 2025-07-10 00:36:43.008 [WARNING][5386] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" HandleID="k8s-pod-network.c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" Workload="localhost-k8s-coredns--7c65d6cfc9--2plgf-eth0" Jul 10 00:36:43.012714 env[1315]: 2025-07-10 00:36:43.008 [INFO][5386] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" HandleID="k8s-pod-network.c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" Workload="localhost-k8s-coredns--7c65d6cfc9--2plgf-eth0" Jul 10 00:36:43.012714 env[1315]: 2025-07-10 00:36:43.009 [INFO][5386] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:36:43.012714 env[1315]: 2025-07-10 00:36:43.011 [INFO][5377] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" Jul 10 00:36:43.013333 env[1315]: time="2025-07-10T00:36:43.012903294Z" level=info msg="TearDown network for sandbox \"c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77\" successfully" Jul 10 00:36:43.013402 env[1315]: time="2025-07-10T00:36:43.013383251Z" level=info msg="StopPodSandbox for \"c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77\" returns successfully" Jul 10 00:36:43.014012 env[1315]: time="2025-07-10T00:36:43.013978169Z" level=info msg="RemovePodSandbox for \"c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77\"" Jul 10 00:36:43.014066 env[1315]: time="2025-07-10T00:36:43.014015928Z" level=info msg="Forcibly stopping sandbox \"c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77\"" Jul 10 00:36:43.075387 env[1315]: 2025-07-10 00:36:43.045 [WARNING][5403] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--2plgf-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7ca43f8d-e9b6-493a-b482-3d2dc7232c75", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 35, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6571ea4c14588882565bc031474feccb76a8f356384e912617b88a4107b72646", Pod:"coredns-7c65d6cfc9-2plgf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6516e0cc8a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:36:43.075387 env[1315]: 2025-07-10 00:36:43.046 [INFO][5403] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" Jul 10 00:36:43.075387 env[1315]: 2025-07-10 00:36:43.046 [INFO][5403] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" iface="eth0" netns="" Jul 10 00:36:43.075387 env[1315]: 2025-07-10 00:36:43.046 [INFO][5403] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" Jul 10 00:36:43.075387 env[1315]: 2025-07-10 00:36:43.046 [INFO][5403] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" Jul 10 00:36:43.075387 env[1315]: 2025-07-10 00:36:43.062 [INFO][5413] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" HandleID="k8s-pod-network.c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" Workload="localhost-k8s-coredns--7c65d6cfc9--2plgf-eth0" Jul 10 00:36:43.075387 env[1315]: 2025-07-10 00:36:43.062 [INFO][5413] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:36:43.075387 env[1315]: 2025-07-10 00:36:43.062 [INFO][5413] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:36:43.075387 env[1315]: 2025-07-10 00:36:43.070 [WARNING][5413] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" HandleID="k8s-pod-network.c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" Workload="localhost-k8s-coredns--7c65d6cfc9--2plgf-eth0" Jul 10 00:36:43.075387 env[1315]: 2025-07-10 00:36:43.070 [INFO][5413] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" HandleID="k8s-pod-network.c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" Workload="localhost-k8s-coredns--7c65d6cfc9--2plgf-eth0" Jul 10 00:36:43.075387 env[1315]: 2025-07-10 00:36:43.071 [INFO][5413] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:36:43.075387 env[1315]: 2025-07-10 00:36:43.073 [INFO][5403] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77" Jul 10 00:36:43.075819 env[1315]: time="2025-07-10T00:36:43.075401922Z" level=info msg="TearDown network for sandbox \"c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77\" successfully" Jul 10 00:36:43.078338 env[1315]: time="2025-07-10T00:36:43.078310549Z" level=info msg="RemovePodSandbox \"c321f17d1114a53d5a0d7baa48b36c5ed263a5a1c2160e6ea49b0c21b7be1b77\" returns successfully" Jul 10 00:36:43.078845 env[1315]: time="2025-07-10T00:36:43.078807986Z" level=info msg="StopPodSandbox for \"4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9\"" Jul 10 00:36:43.140808 env[1315]: 2025-07-10 00:36:43.110 [WARNING][5432] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--ltbnn-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"fd0d6de4-b13e-4994-9b29-6ae2a2c1a419", ResourceVersion:"1224", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 36, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6d23b2e04a267d411b87a27ff984e2044b97fcdabb4994cb58b7f26ba1f33fad", Pod:"goldmane-58fd7646b9-ltbnn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6c37deeddec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:36:43.140808 env[1315]: 2025-07-10 00:36:43.110 [INFO][5432] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" Jul 10 00:36:43.140808 env[1315]: 2025-07-10 00:36:43.110 [INFO][5432] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" iface="eth0" netns="" Jul 10 00:36:43.140808 env[1315]: 2025-07-10 00:36:43.110 [INFO][5432] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" Jul 10 00:36:43.140808 env[1315]: 2025-07-10 00:36:43.110 [INFO][5432] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" Jul 10 00:36:43.140808 env[1315]: 2025-07-10 00:36:43.127 [INFO][5441] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" HandleID="k8s-pod-network.4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" Workload="localhost-k8s-goldmane--58fd7646b9--ltbnn-eth0" Jul 10 00:36:43.140808 env[1315]: 2025-07-10 00:36:43.127 [INFO][5441] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:36:43.140808 env[1315]: 2025-07-10 00:36:43.127 [INFO][5441] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:36:43.140808 env[1315]: 2025-07-10 00:36:43.136 [WARNING][5441] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" HandleID="k8s-pod-network.4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" Workload="localhost-k8s-goldmane--58fd7646b9--ltbnn-eth0" Jul 10 00:36:43.140808 env[1315]: 2025-07-10 00:36:43.136 [INFO][5441] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" HandleID="k8s-pod-network.4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" Workload="localhost-k8s-goldmane--58fd7646b9--ltbnn-eth0" Jul 10 00:36:43.140808 env[1315]: 2025-07-10 00:36:43.137 [INFO][5441] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:36:43.140808 env[1315]: 2025-07-10 00:36:43.139 [INFO][5432] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" Jul 10 00:36:43.141312 env[1315]: time="2025-07-10T00:36:43.141278135Z" level=info msg="TearDown network for sandbox \"4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9\" successfully" Jul 10 00:36:43.141379 env[1315]: time="2025-07-10T00:36:43.141364455Z" level=info msg="StopPodSandbox for \"4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9\" returns successfully" Jul 10 00:36:43.141888 env[1315]: time="2025-07-10T00:36:43.141861572Z" level=info msg="RemovePodSandbox for \"4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9\"" Jul 10 00:36:43.142050 env[1315]: time="2025-07-10T00:36:43.142009292Z" level=info msg="Forcibly stopping sandbox \"4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9\"" Jul 10 00:36:43.206075 env[1315]: 2025-07-10 00:36:43.174 [WARNING][5460] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--ltbnn-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"fd0d6de4-b13e-4994-9b29-6ae2a2c1a419", ResourceVersion:"1224", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 36, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6d23b2e04a267d411b87a27ff984e2044b97fcdabb4994cb58b7f26ba1f33fad", Pod:"goldmane-58fd7646b9-ltbnn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6c37deeddec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:36:43.206075 env[1315]: 2025-07-10 00:36:43.174 [INFO][5460] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" Jul 10 00:36:43.206075 env[1315]: 2025-07-10 00:36:43.174 [INFO][5460] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" iface="eth0" netns="" Jul 10 00:36:43.206075 env[1315]: 2025-07-10 00:36:43.174 [INFO][5460] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" Jul 10 00:36:43.206075 env[1315]: 2025-07-10 00:36:43.174 [INFO][5460] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" Jul 10 00:36:43.206075 env[1315]: 2025-07-10 00:36:43.190 [INFO][5469] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" HandleID="k8s-pod-network.4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" Workload="localhost-k8s-goldmane--58fd7646b9--ltbnn-eth0" Jul 10 00:36:43.206075 env[1315]: 2025-07-10 00:36:43.191 [INFO][5469] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:36:43.206075 env[1315]: 2025-07-10 00:36:43.191 [INFO][5469] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:36:43.206075 env[1315]: 2025-07-10 00:36:43.201 [WARNING][5469] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" HandleID="k8s-pod-network.4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" Workload="localhost-k8s-goldmane--58fd7646b9--ltbnn-eth0" Jul 10 00:36:43.206075 env[1315]: 2025-07-10 00:36:43.201 [INFO][5469] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" HandleID="k8s-pod-network.4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" Workload="localhost-k8s-goldmane--58fd7646b9--ltbnn-eth0" Jul 10 00:36:43.206075 env[1315]: 2025-07-10 00:36:43.202 [INFO][5469] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:36:43.206075 env[1315]: 2025-07-10 00:36:43.204 [INFO][5460] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9" Jul 10 00:36:43.206795 env[1315]: time="2025-07-10T00:36:43.206758670Z" level=info msg="TearDown network for sandbox \"4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9\" successfully" Jul 10 00:36:43.209939 env[1315]: time="2025-07-10T00:36:43.209906855Z" level=info msg="RemovePodSandbox \"4c5a45d85762d922da643868888fdbf4786b705d0bfe915a994bb46e0808a1b9\" returns successfully" Jul 10 00:36:43.210530 env[1315]: time="2025-07-10T00:36:43.210500292Z" level=info msg="StopPodSandbox for \"1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198\"" Jul 10 00:36:43.273687 env[1315]: 2025-07-10 00:36:43.244 [WARNING][5488] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fxvwp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d132b5af-8e1a-4884-a0af-6e4f358a849a", ResourceVersion:"1213", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 36, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3f49ede495569ac49de4d859606ab87018c37d1229ccf1639377e9b6f81f1c1e", Pod:"csi-node-driver-fxvwp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calife3656df14d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:36:43.273687 env[1315]: 2025-07-10 00:36:43.244 [INFO][5488] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" Jul 10 00:36:43.273687 env[1315]: 2025-07-10 
00:36:43.244 [INFO][5488] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" iface="eth0" netns="" Jul 10 00:36:43.273687 env[1315]: 2025-07-10 00:36:43.244 [INFO][5488] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" Jul 10 00:36:43.273687 env[1315]: 2025-07-10 00:36:43.244 [INFO][5488] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" Jul 10 00:36:43.273687 env[1315]: 2025-07-10 00:36:43.261 [INFO][5498] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" HandleID="k8s-pod-network.1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" Workload="localhost-k8s-csi--node--driver--fxvwp-eth0" Jul 10 00:36:43.273687 env[1315]: 2025-07-10 00:36:43.261 [INFO][5498] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:36:43.273687 env[1315]: 2025-07-10 00:36:43.261 [INFO][5498] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:36:43.273687 env[1315]: 2025-07-10 00:36:43.269 [WARNING][5498] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" HandleID="k8s-pod-network.1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" Workload="localhost-k8s-csi--node--driver--fxvwp-eth0" Jul 10 00:36:43.273687 env[1315]: 2025-07-10 00:36:43.269 [INFO][5498] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" HandleID="k8s-pod-network.1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" Workload="localhost-k8s-csi--node--driver--fxvwp-eth0" Jul 10 00:36:43.273687 env[1315]: 2025-07-10 00:36:43.270 [INFO][5498] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:36:43.273687 env[1315]: 2025-07-10 00:36:43.272 [INFO][5488] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" Jul 10 00:36:43.274137 env[1315]: time="2025-07-10T00:36:43.273717117Z" level=info msg="TearDown network for sandbox \"1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198\" successfully" Jul 10 00:36:43.274137 env[1315]: time="2025-07-10T00:36:43.273746677Z" level=info msg="StopPodSandbox for \"1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198\" returns successfully" Jul 10 00:36:43.274187 env[1315]: time="2025-07-10T00:36:43.274142955Z" level=info msg="RemovePodSandbox for \"1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198\"" Jul 10 00:36:43.274231 env[1315]: time="2025-07-10T00:36:43.274190035Z" level=info msg="Forcibly stopping sandbox \"1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198\"" Jul 10 00:36:43.335061 env[1315]: 2025-07-10 00:36:43.304 [WARNING][5516] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fxvwp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d132b5af-8e1a-4884-a0af-6e4f358a849a", ResourceVersion:"1213", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 36, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3f49ede495569ac49de4d859606ab87018c37d1229ccf1639377e9b6f81f1c1e", Pod:"csi-node-driver-fxvwp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calife3656df14d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:36:43.335061 env[1315]: 2025-07-10 00:36:43.305 [INFO][5516] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" Jul 10 00:36:43.335061 env[1315]: 2025-07-10 00:36:43.305 [INFO][5516] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" iface="eth0" netns="" Jul 10 00:36:43.335061 env[1315]: 2025-07-10 00:36:43.305 [INFO][5516] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" Jul 10 00:36:43.335061 env[1315]: 2025-07-10 00:36:43.305 [INFO][5516] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" Jul 10 00:36:43.335061 env[1315]: 2025-07-10 00:36:43.322 [INFO][5525] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" HandleID="k8s-pod-network.1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" Workload="localhost-k8s-csi--node--driver--fxvwp-eth0" Jul 10 00:36:43.335061 env[1315]: 2025-07-10 00:36:43.322 [INFO][5525] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:36:43.335061 env[1315]: 2025-07-10 00:36:43.322 [INFO][5525] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:36:43.335061 env[1315]: 2025-07-10 00:36:43.330 [WARNING][5525] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" HandleID="k8s-pod-network.1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" Workload="localhost-k8s-csi--node--driver--fxvwp-eth0" Jul 10 00:36:43.335061 env[1315]: 2025-07-10 00:36:43.330 [INFO][5525] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" HandleID="k8s-pod-network.1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" Workload="localhost-k8s-csi--node--driver--fxvwp-eth0" Jul 10 00:36:43.335061 env[1315]: 2025-07-10 00:36:43.331 [INFO][5525] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:36:43.335061 env[1315]: 2025-07-10 00:36:43.333 [INFO][5516] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198" Jul 10 00:36:43.335589 env[1315]: time="2025-07-10T00:36:43.335553069Z" level=info msg="TearDown network for sandbox \"1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198\" successfully" Jul 10 00:36:43.338715 env[1315]: time="2025-07-10T00:36:43.338682694Z" level=info msg="RemovePodSandbox \"1be47961cc095535c629131f49e6bb9ff4b01bcacec3c2b7811f3f2756714198\" returns successfully" Jul 10 00:36:47.248167 systemd[1]: Started sshd@14-10.0.0.85:22-10.0.0.1:33990.service. Jul 10 00:36:47.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.85:22-10.0.0.1:33990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:47.249092 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 10 00:36:47.249146 kernel: audit: type=1130 audit(1752107807.247:490): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.85:22-10.0.0.1:33990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:36:47.294000 audit[5554]: USER_ACCT pid=5554 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:47.295094 sshd[5554]: Accepted publickey for core from 10.0.0.1 port 33990 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:36:47.296660 sshd[5554]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:47.295000 audit[5554]: CRED_ACQ pid=5554 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:47.300937 kernel: audit: type=1101 audit(1752107807.294:491): pid=5554 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:47.300993 kernel: audit: type=1103 audit(1752107807.295:492): pid=5554 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:47.302908 kernel: audit: type=1006 audit(1752107807.295:493): pid=5554 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jul 10 00:36:47.295000 audit[5554]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff79ec600 a2=3 a3=1 items=0 ppid=1 pid=5554 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:47.305146 systemd-logind[1302]: New session 15 of user core. Jul 10 00:36:47.305658 systemd[1]: Started session-15.scope. 
Jul 10 00:36:47.306330 kernel: audit: type=1300 audit(1752107807.295:493): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff79ec600 a2=3 a3=1 items=0 ppid=1 pid=5554 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:47.306465 kernel: audit: type=1327 audit(1752107807.295:493): proctitle=737368643A20636F7265205B707269765D Jul 10 00:36:47.295000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:36:47.309000 audit[5554]: USER_START pid=5554 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:47.310000 audit[5557]: CRED_ACQ pid=5557 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:47.313487 kernel: audit: type=1105 audit(1752107807.309:494): pid=5554 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:47.316446 kernel: audit: type=1103 audit(1752107807.310:495): pid=5557 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:47.442307 sshd[5554]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:47.442000 audit[5554]: USER_END pid=5554 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:47.445422 systemd[1]: sshd@14-10.0.0.85:22-10.0.0.1:33990.service: Deactivated successfully. Jul 10 00:36:47.446503 systemd[1]: session-15.scope: Deactivated successfully. Jul 10 00:36:47.446511 systemd-logind[1302]: Session 15 logged out. Waiting for processes to exit. Jul 10 00:36:47.447667 systemd-logind[1302]: Removed session 15. 
Jul 10 00:36:47.442000 audit[5554]: CRED_DISP pid=5554 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:47.450679 kernel: audit: type=1106 audit(1752107807.442:496): pid=5554 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:47.450759 kernel: audit: type=1104 audit(1752107807.442:497): pid=5554 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:47.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.85:22-10.0.0.1:33990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:52.445710 systemd[1]: Started sshd@15-10.0.0.85:22-10.0.0.1:33998.service. Jul 10 00:36:52.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.85:22-10.0.0.1:33998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:52.446814 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 10 00:36:52.446892 kernel: audit: type=1130 audit(1752107812.445:499): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.85:22-10.0.0.1:33998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:52.529000 audit[5592]: USER_ACCT pid=5592 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:52.529702 sshd[5592]: Accepted publickey for core from 10.0.0.1 port 33998 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:36:52.530895 sshd[5592]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:52.530000 audit[5592]: CRED_ACQ pid=5592 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:52.535625 kernel: audit: type=1101 audit(1752107812.529:500): pid=5592 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:52.535702 kernel: audit: type=1103 audit(1752107812.530:501): pid=5592 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:52.535846 systemd[1]: Started session-16.scope. Jul 10 00:36:52.536050 systemd-logind[1302]: New session 16 of user core. 
Jul 10 00:36:52.537657 kernel: audit: type=1006 audit(1752107812.530:502): pid=5592 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jul 10 00:36:52.530000 audit[5592]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeb142d20 a2=3 a3=1 items=0 ppid=1 pid=5592 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:52.540971 kernel: audit: type=1300 audit(1752107812.530:502): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeb142d20 a2=3 a3=1 items=0 ppid=1 pid=5592 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:52.541046 kernel: audit: type=1327 audit(1752107812.530:502): proctitle=737368643A20636F7265205B707269765D Jul 10 00:36:52.530000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:36:52.539000 audit[5592]: USER_START pid=5592 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:52.545755 kernel: audit: type=1105 audit(1752107812.539:503): pid=5592 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:52.541000 audit[5595]: CRED_ACQ pid=5595 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:52.549171 kernel: audit: type=1103 audit(1752107812.541:504): pid=5595 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:52.759645 sshd[5592]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:52.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.85:22-10.0.0.1:40500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:52.760937 systemd[1]: Started sshd@16-10.0.0.85:22-10.0.0.1:40500.service. Jul 10 00:36:52.760000 audit[5592]: USER_END pid=5592 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:52.766075 systemd-logind[1302]: Session 16 logged out. Waiting for processes to exit. Jul 10 00:36:52.767313 systemd[1]: sshd@15-10.0.0.85:22-10.0.0.1:33998.service: Deactivated successfully. Jul 10 00:36:52.767776 kernel: audit: type=1130 audit(1752107812.760:505): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.85:22-10.0.0.1:40500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:36:52.767849 kernel: audit: type=1106 audit(1752107812.760:506): pid=5592 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:52.760000 audit[5592]: CRED_DISP pid=5592 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:52.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.85:22-10.0.0.1:33998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:52.768284 systemd[1]: session-16.scope: Deactivated successfully. Jul 10 00:36:52.769183 systemd-logind[1302]: Removed session 16. Jul 10 00:36:52.803000 audit[5604]: USER_ACCT pid=5604 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:52.804507 sshd[5604]: Accepted publickey for core from 10.0.0.1 port 40500 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:36:52.805000 audit[5604]: CRED_ACQ pid=5604 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:52.805000 audit[5604]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe4dab5b0 a2=3 a3=1 items=0 ppid=1 pid=5604 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:52.805000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:36:52.805892 sshd[5604]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:52.811150 systemd[1]: Started session-17.scope. Jul 10 00:36:52.811351 systemd-logind[1302]: New session 17 of user core. 
Jul 10 00:36:52.814000 audit[5604]: USER_START pid=5604 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:52.816000 audit[5609]: CRED_ACQ pid=5609 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:53.036832 sshd[5604]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:53.038000 audit[5604]: USER_END pid=5604 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:53.038000 audit[5604]: CRED_DISP pid=5604 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:53.039573 systemd[1]: Started sshd@17-10.0.0.85:22-10.0.0.1:40510.service. Jul 10 00:36:53.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.85:22-10.0.0.1:40510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:53.042027 systemd[1]: sshd@16-10.0.0.85:22-10.0.0.1:40500.service: Deactivated successfully. Jul 10 00:36:53.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.85:22-10.0.0.1:40500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:53.043124 systemd-logind[1302]: Session 17 logged out. Waiting for processes to exit. Jul 10 00:36:53.043181 systemd[1]: session-17.scope: Deactivated successfully. Jul 10 00:36:53.043930 systemd-logind[1302]: Removed session 17. Jul 10 00:36:53.087000 audit[5616]: USER_ACCT pid=5616 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:53.088151 sshd[5616]: Accepted publickey for core from 10.0.0.1 port 40510 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:36:53.088000 audit[5616]: CRED_ACQ pid=5616 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:53.088000 audit[5616]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdb613420 a2=3 a3=1 items=0 ppid=1 pid=5616 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:53.088000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:36:53.089385 sshd[5616]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:53.092706 systemd-logind[1302]: New session 18 of user core. 
Jul 10 00:36:53.093578 systemd[1]: Started session-18.scope. Jul 10 00:36:53.096000 audit[5616]: USER_START pid=5616 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:53.098000 audit[5621]: CRED_ACQ pid=5621 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:55.028000 audit[5634]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=5634 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:55.028000 audit[5634]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11944 a0=3 a1=ffffc86c4d30 a2=0 a3=1 items=0 ppid=2205 pid=5634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:55.028000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:55.033058 systemd[1]: Started sshd@18-10.0.0.85:22-10.0.0.1:40512.service. Jul 10 00:36:55.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.85:22-10.0.0.1:40512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:55.035469 sshd[5616]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:55.037000 audit[5616]: USER_END pid=5616 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:55.037000 audit[5616]: CRED_DISP pid=5616 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:55.037000 audit[5634]: NETFILTER_CFG table=nat:133 family=2 entries=26 op=nft_register_rule pid=5634 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:55.037000 audit[5634]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8076 a0=3 a1=ffffc86c4d30 a2=0 a3=1 items=0 ppid=2205 pid=5634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:55.037000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:55.039368 systemd[1]: sshd@17-10.0.0.85:22-10.0.0.1:40510.service: Deactivated successfully. Jul 10 00:36:55.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.85:22-10.0.0.1:40510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:55.040546 systemd-logind[1302]: Session 18 logged out. Waiting for processes to exit. 
Jul 10 00:36:55.040625 systemd[1]: session-18.scope: Deactivated successfully. Jul 10 00:36:55.041761 systemd-logind[1302]: Removed session 18. Jul 10 00:36:55.055000 audit[5645]: NETFILTER_CFG table=filter:134 family=2 entries=32 op=nft_register_rule pid=5645 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:55.055000 audit[5645]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11944 a0=3 a1=ffffc21d9ec0 a2=0 a3=1 items=0 ppid=2205 pid=5645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:55.055000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:55.060000 audit[5645]: NETFILTER_CFG table=nat:135 family=2 entries=26 op=nft_register_rule pid=5645 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:36:55.060000 audit[5645]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8076 a0=3 a1=ffffc21d9ec0 a2=0 a3=1 items=0 ppid=2205 pid=5645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:55.060000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:36:55.072869 systemd[1]: run-containerd-runc-k8s.io-e46f071d032893f99d27d752fb23cf9f2cff7644979c2115796411dfd015505d-runc.XI1R3J.mount: Deactivated successfully. Jul 10 00:36:55.107000 audit[5635]: USER_ACCT pid=5635 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:55.108802 sshd[5635]: Accepted publickey for core from 10.0.0.1 port 40512 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:36:55.111000 audit[5635]: CRED_ACQ pid=5635 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:55.111000 audit[5635]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffc159f10 a2=3 a3=1 items=0 ppid=1 pid=5635 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:55.111000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:36:55.112123 sshd[5635]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:55.120347 systemd-logind[1302]: New session 19 of user core. Jul 10 00:36:55.121206 systemd[1]: Started session-19.scope. 
Jul 10 00:36:55.127000 audit[5635]: USER_START pid=5635 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:55.129000 audit[5663]: CRED_ACQ pid=5663 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:55.670743 sshd[5635]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:55.671000 audit[5635]: USER_END pid=5635 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:55.671000 audit[5635]: CRED_DISP pid=5635 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:55.673199 systemd[1]: Started sshd@19-10.0.0.85:22-10.0.0.1:40518.service. Jul 10 00:36:55.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.85:22-10.0.0.1:40518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:55.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.85:22-10.0.0.1:40512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:55.673733 systemd[1]: sshd@18-10.0.0.85:22-10.0.0.1:40512.service: Deactivated successfully. Jul 10 00:36:55.674897 systemd[1]: session-19.scope: Deactivated successfully. Jul 10 00:36:55.674909 systemd-logind[1302]: Session 19 logged out. Waiting for processes to exit. Jul 10 00:36:55.675809 systemd-logind[1302]: Removed session 19. Jul 10 00:36:55.718000 audit[5672]: USER_ACCT pid=5672 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:55.719612 sshd[5672]: Accepted publickey for core from 10.0.0.1 port 40518 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:36:55.720000 audit[5672]: CRED_ACQ pid=5672 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:55.720000 audit[5672]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdd3deed0 a2=3 a3=1 items=0 ppid=1 pid=5672 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:36:55.720000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:36:55.721124 sshd[5672]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:55.724982 systemd-logind[1302]: New session 20 of user core. 
Jul 10 00:36:55.725827 systemd[1]: Started session-20.scope. Jul 10 00:36:55.729000 audit[5672]: USER_START pid=5672 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:55.730000 audit[5677]: CRED_ACQ pid=5677 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:55.868510 sshd[5672]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:55.869000 audit[5672]: USER_END pid=5672 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:55.869000 audit[5672]: CRED_DISP pid=5672 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:36:55.871528 systemd-logind[1302]: Session 20 logged out. Waiting for processes to exit. Jul 10 00:36:55.872340 systemd[1]: sshd@19-10.0.0.85:22-10.0.0.1:40518.service: Deactivated successfully. Jul 10 00:36:55.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.85:22-10.0.0.1:40518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:55.873255 systemd[1]: session-20.scope: Deactivated successfully. Jul 10 00:36:55.874621 systemd-logind[1302]: Removed session 20. 
Jul 10 00:37:00.499000 audit[5695]: NETFILTER_CFG table=filter:136 family=2 entries=20 op=nft_register_rule pid=5695 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:37:00.503135 kernel: kauditd_printk_skb: 57 callbacks suppressed Jul 10 00:37:00.503228 kernel: audit: type=1325 audit(1752107820.499:548): table=filter:136 family=2 entries=20 op=nft_register_rule pid=5695 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:37:00.503254 kernel: audit: type=1300 audit(1752107820.499:548): arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffe1801960 a2=0 a3=1 items=0 ppid=2205 pid=5695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:37:00.499000 audit[5695]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3016 a0=3 a1=ffffe1801960 a2=0 a3=1 items=0 ppid=2205 pid=5695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:37:00.499000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:37:00.508994 kernel: audit: type=1327 audit(1752107820.499:548): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:37:00.510000 audit[5695]: NETFILTER_CFG table=nat:137 family=2 entries=110 op=nft_register_chain pid=5695 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:37:00.510000 audit[5695]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=50988 a0=3 a1=ffffe1801960 a2=0 a3=1 items=0 ppid=2205 pid=5695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:37:00.517313 kernel: audit: type=1325 audit(1752107820.510:549): table=nat:137 family=2 entries=110 op=nft_register_chain pid=5695 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 00:37:00.517365 kernel: audit: type=1300 audit(1752107820.510:549): arch=c00000b7 syscall=211 success=yes exit=50988 a0=3 a1=ffffe1801960 a2=0 a3=1 items=0 ppid=2205 pid=5695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:37:00.517394 kernel: audit: type=1327 audit(1752107820.510:549): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:37:00.510000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 00:37:00.870419 systemd[1]: Started sshd@20-10.0.0.85:22-10.0.0.1:40520.service. Jul 10 00:37:00.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.85:22-10.0.0.1:40520 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:37:00.874467 kernel: audit: type=1130 audit(1752107820.870:550): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.85:22-10.0.0.1:40520 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:00.911000 audit[5697]: USER_ACCT pid=5697 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:00.912616 sshd[5697]: Accepted publickey for core from 10.0.0.1 port 40520 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:37:00.914480 sshd[5697]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:37:00.913000 audit[5697]: CRED_ACQ pid=5697 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:00.918842 kernel: audit: type=1101 audit(1752107820.911:551): pid=5697 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:00.918905 kernel: audit: type=1103 audit(1752107820.913:552): pid=5697 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:00.918931 kernel: audit: type=1006 audit(1752107820.913:553): pid=5697 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jul 10 00:37:00.919150 systemd-logind[1302]: New session 21 of user core. Jul 10 00:37:00.919328 systemd[1]: Started session-21.scope. 
Jul 10 00:37:00.913000 audit[5697]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffda590750 a2=3 a3=1 items=0 ppid=1 pid=5697 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:37:00.913000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:37:00.922000 audit[5697]: USER_START pid=5697 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:00.924000 audit[5700]: CRED_ACQ pid=5700 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:01.033995 sshd[5697]: pam_unix(sshd:session): session closed for user core Jul 10 00:37:01.034000 audit[5697]: USER_END pid=5697 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:01.034000 audit[5697]: CRED_DISP pid=5697 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:01.037195 systemd-logind[1302]: Session 21 logged out. Waiting for processes to exit. Jul 10 00:37:01.037408 systemd[1]: sshd@20-10.0.0.85:22-10.0.0.1:40520.service: Deactivated successfully. Jul 10 00:37:01.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.85:22-10.0.0.1:40520 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.038284 systemd[1]: session-21.scope: Deactivated successfully. Jul 10 00:37:01.038684 systemd-logind[1302]: Removed session 21. Jul 10 00:37:02.203535 kubelet[2094]: E0710 00:37:02.203500 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:06.037480 systemd[1]: Started sshd@21-10.0.0.85:22-10.0.0.1:33798.service. Jul 10 00:37:06.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.85:22-10.0.0.1:33798 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:06.041171 kernel: kauditd_printk_skb: 7 callbacks suppressed Jul 10 00:37:06.041236 kernel: audit: type=1130 audit(1752107826.036:559): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.85:22-10.0.0.1:33798 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:37:06.078000 audit[5712]: USER_ACCT pid=5712 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:06.079763 sshd[5712]: Accepted publickey for core from 10.0.0.1 port 33798 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:37:06.081364 sshd[5712]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:37:06.079000 audit[5712]: CRED_ACQ pid=5712 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:06.085475 systemd[1]: Started session-22.scope. Jul 10 00:37:06.085561 systemd-logind[1302]: New session 22 of user core. Jul 10 00:37:06.086122 kernel: audit: type=1101 audit(1752107826.078:560): pid=5712 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:06.086164 kernel: audit: type=1103 audit(1752107826.079:561): pid=5712 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:06.086181 kernel: audit: type=1006 audit(1752107826.079:562): pid=5712 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jul 10 00:37:06.079000 audit[5712]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff1340b30 a2=3 a3=1 items=0 ppid=1 pid=5712 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:37:06.091450 kernel: audit: type=1300 audit(1752107826.079:562): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff1340b30 a2=3 a3=1 items=0 ppid=1 pid=5712 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:37:06.079000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:37:06.092644 kernel: audit: type=1327 audit(1752107826.079:562): proctitle=737368643A20636F7265205B707269765D Jul 10 00:37:06.088000 audit[5712]: USER_START pid=5712 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:06.096441 kernel: audit: type=1105 audit(1752107826.088:563): pid=5712 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:06.089000 audit[5715]: CRED_ACQ pid=5715 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:06.099438 kernel: audit: type=1103 audit(1752107826.089:564): pid=5715 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:06.196758 sshd[5712]: pam_unix(sshd:session): session closed for user core Jul 10 00:37:06.196000 audit[5712]: USER_END pid=5712 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:06.199175 systemd[1]: sshd@21-10.0.0.85:22-10.0.0.1:33798.service: Deactivated successfully. Jul 10 00:37:06.200361 systemd-logind[1302]: Session 22 logged out. Waiting for processes to exit. Jul 10 00:37:06.200451 systemd[1]: session-22.scope: Deactivated successfully. Jul 10 00:37:06.201270 systemd-logind[1302]: Removed session 22. Jul 10 00:37:06.196000 audit[5712]: CRED_DISP pid=5712 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:06.209263 kernel: audit: type=1106 audit(1752107826.196:565): pid=5712 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:06.209317 kernel: audit: type=1104 audit(1752107826.196:566): pid=5712 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:06.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.85:22-10.0.0.1:33798 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:11.199914 systemd[1]: Started sshd@22-10.0.0.85:22-10.0.0.1:33804.service. Jul 10 00:37:11.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.85:22-10.0.0.1:33804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:11.200995 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 10 00:37:11.201070 kernel: audit: type=1130 audit(1752107831.198:568): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.85:22-10.0.0.1:33804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:37:11.240000 audit[5750]: USER_ACCT pid=5750 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:11.242640 sshd[5750]: Accepted publickey for core from 10.0.0.1 port 33804 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:37:11.243460 sshd[5750]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:37:11.241000 audit[5750]: CRED_ACQ pid=5750 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:11.248170 kernel: audit: type=1101 audit(1752107831.240:569): pid=5750 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:11.248245 kernel: audit: type=1103 audit(1752107831.241:570): pid=5750 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:11.250260 kernel: audit: type=1006 audit(1752107831.241:571): pid=5750 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jul 10 00:37:11.241000 audit[5750]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff744fe90 a2=3 a3=1 items=0 ppid=1 pid=5750 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:37:11.253705 kernel: audit: type=1300 audit(1752107831.241:571): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff744fe90 a2=3 a3=1 items=0 ppid=1 pid=5750 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:37:11.241000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:37:11.255270 kernel: audit: type=1327 audit(1752107831.241:571): proctitle=737368643A20636F7265205B707269765D Jul 10 00:37:11.258139 systemd[1]: Started session-23.scope. Jul 10 00:37:11.259077 systemd-logind[1302]: New session 23 of user core. 
Jul 10 00:37:11.261000 audit[5750]: USER_START pid=5750 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:11.263000 audit[5753]: CRED_ACQ pid=5753 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:11.269872 kernel: audit: type=1105 audit(1752107831.261:572): pid=5750 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:11.269961 kernel: audit: type=1103 audit(1752107831.263:573): pid=5753 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:11.386369 sshd[5750]: pam_unix(sshd:session): session closed for user core Jul 10 00:37:11.385000 audit[5750]: USER_END pid=5750 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:11.388757 systemd[1]: sshd@22-10.0.0.85:22-10.0.0.1:33804.service: Deactivated successfully. Jul 10 00:37:11.389667 systemd[1]: session-23.scope: Deactivated successfully. Jul 10 00:37:11.385000 audit[5750]: CRED_DISP pid=5750 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:11.394564 kernel: audit: type=1106 audit(1752107831.385:574): pid=5750 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:11.394624 kernel: audit: type=1104 audit(1752107831.385:575): pid=5750 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:11.391863 systemd-logind[1302]: Session 23 logged out. Waiting for processes to exit. Jul 10 00:37:11.392742 systemd-logind[1302]: Removed session 23. Jul 10 00:37:11.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.85:22-10.0.0.1:33804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:37:13.203047 kubelet[2094]: E0710 00:37:13.202995 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:13.203411 kubelet[2094]: E0710 00:37:13.203040 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:16.389648 systemd[1]: Started sshd@23-10.0.0.85:22-10.0.0.1:38332.service. Jul 10 00:37:16.393482 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 10 00:37:16.393554 kernel: audit: type=1130 audit(1752107836.388:577): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.85:22-10.0.0.1:38332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:16.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.85:22-10.0.0.1:38332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:16.432000 audit[5785]: USER_ACCT pid=5785 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:16.434087 sshd[5785]: Accepted publickey for core from 10.0.0.1 port 38332 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:37:16.435835 sshd[5785]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:37:16.433000 audit[5785]: CRED_ACQ pid=5785 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:16.440257 kernel: audit: type=1101 audit(1752107836.432:578): pid=5785 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:16.440733 kernel: audit: type=1103 audit(1752107836.433:579): pid=5785 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:16.440774 kernel: audit: type=1006 audit(1752107836.434:580): pid=5785 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jul 10 00:37:16.442350 kernel: audit: type=1300 audit(1752107836.434:580): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcec87f90 a2=3 a3=1 items=0 ppid=1 pid=5785 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:37:16.434000 audit[5785]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcec87f90 a2=3 a3=1 items=0 ppid=1 pid=5785 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:37:16.443070 systemd-logind[1302]: New session 24 of 
user core. Jul 10 00:37:16.443593 systemd[1]: Started session-24.scope. Jul 10 00:37:16.434000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 00:37:16.446519 kernel: audit: type=1327 audit(1752107836.434:580): proctitle=737368643A20636F7265205B707269765D Jul 10 00:37:16.446000 audit[5785]: USER_START pid=5785 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:16.448000 audit[5788]: CRED_ACQ pid=5788 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:16.452460 kernel: audit: type=1105 audit(1752107836.446:581): pid=5785 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:16.455484 kernel: audit: type=1103 audit(1752107836.448:582): pid=5788 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:16.559674 sshd[5785]: pam_unix(sshd:session): session closed for user core Jul 10 00:37:16.559000 audit[5785]: USER_END pid=5785 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:16.562592 systemd-logind[1302]: Session 24 logged out. Waiting for processes to exit. Jul 10 00:37:16.562874 systemd[1]: sshd@23-10.0.0.85:22-10.0.0.1:38332.service: Deactivated successfully. Jul 10 00:37:16.563830 systemd[1]: session-24.scope: Deactivated successfully. Jul 10 00:37:16.559000 audit[5785]: CRED_DISP pid=5785 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:16.567864 kernel: audit: type=1106 audit(1752107836.559:583): pid=5785 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:16.567945 kernel: audit: type=1104 audit(1752107836.559:584): pid=5785 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 10 00:37:16.564838 systemd-logind[1302]: Removed session 24. Jul 10 00:37:16.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.85:22-10.0.0.1:38332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'